Test Report: Docker_Linux 21223

484a1af5b273601f72ebe358add6bfaeab0cd477:2025-08-04:40792

Failed tests: 36 of 431

Order  Failed test  Duration (s)
173 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/StartWithProxy 519.7
175 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart 369.97
177 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods 1.74
187 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd 1.8
188 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly 1.8
189 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig 742.89
190 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth 1.59
193 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/InvalidService 0.06
196 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd 1.78
199 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd 2.88
203 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect 1.52
205 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim 241.49
209 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL 1.31
215 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels 1.67
220 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/DeployApp 0.07
221 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/List 0.26
222 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
226 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/any-port 2.43
227 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/HTTPS 0.32
229 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/Format 0.27
231 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/URL 0.26
243 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DockerEnv/bash 0.62
248 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.27
252 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.06
253 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 116.36
367 TestKubernetesUpgrade 805.99
428 TestStartStop/group/no-preload/serial/FirstStart 523.07
436 TestStartStop/group/newest-cni/serial/FirstStart 505.9
480 TestStartStop/group/no-preload/serial/DeployApp 0.62
481 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 95.84
495 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 94.65
499 TestStartStop/group/no-preload/serial/SecondStart 371.76
512 TestStartStop/group/newest-cni/serial/SecondStart 254.98
516 TestStartStop/group/newest-cni/serial/Pause 26.45
517 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.2
518 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 267.35
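
Each of these failures can usually be reproduced in isolation. A minimal sketch for re-running a single failed test from a minikube source checkout, modeled on the upstream contributor docs (the make target and TEST_ARGS flags are assumptions and may vary between branches):

# Build the minikube binary under test, then run one integration test by name.
make
make integration -e TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/StartWithProxy -test.timeout=60m -test.v"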
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/StartWithProxy (519.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
E0804 08:46:47.353392 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:49:03.491491 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:49:31.201887 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:41.685369 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:41.691720 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:41.703007 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:41.724323 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:41.765670 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:41.847089 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:42.008623 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:42.330351 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:42.972392 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:44.254129 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:46.816978 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:50:51.938491 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:51:02.180422 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:51:22.661786 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:52:03.623930 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:53:25.545446 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:54:03.491925 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2251: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 80 (8m39.407126006s)

-- stdout --
	* [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Found network options:
	  - HTTP_PROXY=localhost:38447
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:38447 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:38447 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-699837 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-699837 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001773586s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.545633246s
	[control-plane-check] kube-scheduler is healthy after 33.512654334s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000406592s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.731399ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.505015094s
	[control-plane-check] kube-scheduler is healthy after 33.41794123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000473142s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.731399ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.505015094s
	[control-plane-check] kube-scheduler is healthy after 33.41794123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000473142s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
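
The kubeadm hint embedded in the output above is the most direct next step. A sketch of following it, assuming a shell on the node via minikube ssh (sudo added because crictl talks to a root-owned socket):

# Open a shell on the node, then list kube containers and dump the apiserver's logs.
out/minikube-linux-amd64 ssh -p functional-699837
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
# Substitute the kube-apiserver container ID found by the listing above.
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID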
functional_test.go:2253: failed minikube start. args "out/minikube-linux-amd64 start -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
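
The inspect output maps the apiserver port (8441/tcp) to 127.0.0.1:32786 on the host, so the endpoint kubeadm was polling can be probed directly from the host. A sketch (-k because the apiserver serves a self-signed certificate; the docker-in-docker listing assumes the docker CLI shipped in the kicbase image):

# Probe the same /livez endpoint kubeadm waited on, via the published host port.
curl -k https://127.0.0.1:32786/livez
# Check whether the apiserver container is running at all inside the node.
docker exec functional-699837 docker ps -a --filter name=kube-apiserver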
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 6 (266.501474ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0804 08:55:20.712428 1653572 status.go:458] kubeconfig endpoint: get endpoint: "functional-699837" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-699837" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/StartWithProxy (519.70s)
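
Two warnings in this run name their own fixes. A hedged sketch combining them (the IP comes from this run's NO_PROXY warning; the commands are the ones minikube itself suggests):

# Include the minikube IP in NO_PROXY so apiserver traffic bypasses the local proxy.
export NO_PROXY="$NO_PROXY,192.168.49.2"
# Repoint the stale kubectl context reported by 'minikube status'.
out/minikube-linux-amd64 update-context -p functional-699837
# Collect full logs for a bug report, as the error box suggests.
out/minikube-linux-amd64 logs -p functional-699837 --file=logs.txt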

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart (369.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart
I0804 08:55:20.727969 1582690 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-699837 --alsologtostderr -v=8
E0804 08:55:41.678286 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:56:09.389211 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:59:03.491281 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:00:26.563280 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:00:41.677902 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:676: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-699837 --alsologtostderr -v=8: exit status 80 (6m7.913150445s)

-- stdout --
	* [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Updating the running docker "functional-699837" container ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I0804 08:55:20.770600 1653676 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:55:20.770872 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.770883 1653676 out.go:358] Setting ErrFile to fd 2...
	I0804 08:55:20.770890 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.771067 1653676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:55:20.771644 1653676 out.go:352] Setting JSON to false
	I0804 08:55:20.772653 1653676 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149810,"bootTime":1754147911,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:55:20.772739 1653676 start.go:140] virtualization: kvm guest
	I0804 08:55:20.774597 1653676 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:55:20.775675 1653676 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:55:20.775678 1653676 notify.go:220] Checking for updates...
	I0804 08:55:20.776705 1653676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:55:20.777818 1653676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:20.778845 1653676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:55:20.779811 1653676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:55:20.780885 1653676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:55:20.782127 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:20.782240 1653676 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:55:20.804704 1653676 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:55:20.804841 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.850605 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.841828701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.850698 1653676 docker.go:318] overlay module found
	I0804 08:55:20.852305 1653676 out.go:177] * Using the docker driver based on existing profile
	I0804 08:55:20.853166 1653676 start.go:304] selected driver: docker
	I0804 08:55:20.853179 1653676 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.853275 1653676 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:55:20.853364 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.899900 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.891412564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.900590 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:20.900687 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:20.900743 1653676 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.902216 1653676 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 08:55:20.903155 1653676 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:55:20.904009 1653676 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:55:20.904940 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:20.904978 1653676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:55:20.904991 1653676 cache.go:56] Caching tarball of preloaded images
	I0804 08:55:20.905036 1653676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:55:20.905069 1653676 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 08:55:20.905079 1653676 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 08:55:20.905203 1653676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 08:55:20.923511 1653676 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 08:55:20.923529 1653676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 08:55:20.923544 1653676 cache.go:230] Successfully downloaded all kic artifacts
	I0804 08:55:20.923577 1653676 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 08:55:20.923631 1653676 start.go:364] duration metric: took 36.633µs to acquireMachinesLock for "functional-699837"
	I0804 08:55:20.923647 1653676 start.go:96] Skipping create...Using existing machine configuration
	I0804 08:55:20.923652 1653676 fix.go:54] fixHost starting: 
	I0804 08:55:20.923842 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:20.940410 1653676 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 08:55:20.940440 1653676 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 08:55:20.942107 1653676 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 08:55:20.943161 1653676 machine.go:93] provisionDockerMachine start ...
	I0804 08:55:20.943249 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:20.959620 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:20.959871 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:20.959884 1653676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 08:55:21.080396 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.080433 1653676 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 08:55:21.080500 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.097426 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.097649 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.097666 1653676 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 08:55:21.227825 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.227926 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.246066 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.246278 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.246294 1653676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 08:55:21.373154 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
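
The provisioning steps above run each command over SSH to the container's host-mapped port 32783 as the "docker" user, via libmachine's native Go SSH client. A minimal sketch of the same pattern using golang.org/x/crypto/ssh follows; the port, user, and key path are taken from this run's log, while everything else is illustrative rather than minikube's actual sshutil code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and endpoint as reported in this run's log (sshutil.go:53).
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same first command the provisioner runs above: read the hostname.
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
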
	I0804 08:55:21.373185 1653676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 08:55:21.373228 1653676 ubuntu.go:177] setting up certificates
	I0804 08:55:21.373273 1653676 provision.go:84] configureAuth start
	I0804 08:55:21.373335 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:21.390471 1653676 provision.go:143] copyHostCerts
	I0804 08:55:21.390507 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390548 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 08:55:21.390558 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390632 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 08:55:21.390734 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390760 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 08:55:21.390767 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390803 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 08:55:21.390876 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390902 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 08:55:21.390914 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390947 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 08:55:21.391030 1653676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
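
The "generating server cert" step above signs a Docker TLS server certificate against the profile's minikubeCA, embedding the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-699837, localhost, minikube). A self-contained crypto/x509 sketch of that kind of issuance follows; keys are generated fresh here purely for illustration (the real run reuses ca-key.pem from the .minikube directory), and the 2048-bit RSA size is an assumption.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	// CA key pair; illustrative stand-in for the existing ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server cert with the org and SANs from the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-699837"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"functional-699837", "localhost", "minikube"},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
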
	I0804 08:55:21.573518 1653676 provision.go:177] copyRemoteCerts
	I0804 08:55:21.573582 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 08:55:21.573618 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.591269 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:21.681513 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 08:55:21.681585 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 08:55:21.702708 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 08:55:21.702758 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 08:55:21.723583 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 08:55:21.723630 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 08:55:21.744569 1653676 provision.go:87] duration metric: took 371.27679ms to configureAuth
	I0804 08:55:21.744602 1653676 ubuntu.go:193] setting minikube options for container-runtime
	I0804 08:55:21.744799 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:21.744861 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.762017 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.762244 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.762255 1653676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 08:55:21.889470 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 08:55:21.889494 1653676 ubuntu.go:71] root file system type: overlay
	I0804 08:55:21.889614 1653676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 08:55:21.889686 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.906485 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.906734 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.906827 1653676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 08:55:22.043972 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 08:55:22.044042 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.061528 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:22.061801 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:22.061820 1653676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 08:55:22.189999 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 08:55:22.190024 1653676 machine.go:96] duration metric: took 1.246850112s to provisionDockerMachine
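
The unit written above is rendered by minikube's Docker provisioner and pushed to docker.service.new; the doubled ExecStart= is the standard systemd idiom for replacing an inherited start command, and the final diff-or-move command restarts Docker only when the rendered unit actually differs from what is installed. A heavily reduced text/template sketch of such rendering follows; the template string below is illustrative, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// A much smaller stand-in for the unit above; the real one carries the
// full [Unit]/[Install] sections and the explanatory comments.
const unitTmpl = `[Service]
# First ExecStart= clears the command inherited from the base unit.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	data := struct{ ExtraArgs []string }{
		// Flags taken from the rendered unit in the log above.
		ExtraArgs: []string{
			"--default-ulimit=nofile=1048576:1048576",
			"--insecure-registry 10.96.0.0/12",
		},
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
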
	I0804 08:55:22.190035 1653676 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 08:55:22.190046 1653676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 08:55:22.190105 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 08:55:22.190157 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.207121 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.297799 1653676 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 08:55:22.300559 1653676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0804 08:55:22.300580 1653676 command_runner.go:130] > NAME="Ubuntu"
	I0804 08:55:22.300588 1653676 command_runner.go:130] > VERSION_ID="22.04"
	I0804 08:55:22.300596 1653676 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0804 08:55:22.300602 1653676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0804 08:55:22.300608 1653676 command_runner.go:130] > ID=ubuntu
	I0804 08:55:22.300614 1653676 command_runner.go:130] > ID_LIKE=debian
	I0804 08:55:22.300622 1653676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0804 08:55:22.300634 1653676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0804 08:55:22.300652 1653676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0804 08:55:22.300662 1653676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0804 08:55:22.300667 1653676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0804 08:55:22.300719 1653676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 08:55:22.300753 1653676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 08:55:22.300768 1653676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 08:55:22.300780 1653676 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 08:55:22.300795 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 08:55:22.300857 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 08:55:22.300964 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 08:55:22.300977 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /etc/ssl/certs/15826902.pem
	I0804 08:55:22.301064 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 08:55:22.301073 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> /etc/test/nested/copy/1582690/hosts
	I0804 08:55:22.301115 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 08:55:22.308734 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:22.329778 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 08:55:22.350435 1653676 start.go:296] duration metric: took 160.385758ms for postStartSetup
	I0804 08:55:22.350534 1653676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 08:55:22.350588 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.367129 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.453443 1653676 command_runner.go:130] > 33%
	I0804 08:55:22.453718 1653676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 08:55:22.457863 1653676 command_runner.go:130] > 197G
	I0804 08:55:22.457888 1653676 fix.go:56] duration metric: took 1.534232726s for fixHost
	I0804 08:55:22.457898 1653676 start.go:83] releasing machines lock for "functional-699837", held for 1.534258328s
	I0804 08:55:22.457964 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:22.474710 1653676 ssh_runner.go:195] Run: cat /version.json
	I0804 08:55:22.474768 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.474834 1653676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 08:55:22.474905 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.492489 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.492983 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.576302 1653676 command_runner.go:130] > {"iso_version": "v1.36.0-1753487480-21147", "kicbase_version": "v0.0.47-1753871403-21198", "minikube_version": "v1.36.0", "commit": "69470231e9abd2d11a84a83b271e426458d5d12f"}
	I0804 08:55:22.576422 1653676 ssh_runner.go:195] Run: systemctl --version
	I0804 08:55:22.653754 1653676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 08:55:22.655827 1653676 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.16)
	I0804 08:55:22.655870 1653676 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0804 08:55:22.655949 1653676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 08:55:22.659872 1653676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0804 08:55:22.659895 1653676 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:22.659905 1653676 command_runner.go:130] > Device: 37h/55d	Inode: 822247      Links: 1
	I0804 08:55:22.659914 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:22.659929 1653676 command_runner.go:130] > Access: 2025-08-04 08:46:48.521872821 +0000
	I0804 08:55:22.659937 1653676 command_runner.go:130] > Modify: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659947 1653676 command_runner.go:130] > Change: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659959 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.660164 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 08:55:22.676431 1653676 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 08:55:22.676489 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 08:55:22.683904 1653676 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
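
The find/sed pipeline above patches the image's loopback CNI config in place: it inserts a "name" field when one is missing and pins cniVersion to 1.0.0 so current CNI plugins accept the file. The same patch expressed directly over the JSON is sketched below; the input literal is an assumed shape for /etc/cni/net.d/200-loopback.conf, not the file's verbatim contents.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Assumed pre-patch contents of the loopback conf.
	raw := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)

	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	// Mirror the sed above: add "name" if absent, pin cniVersion to 1.0.0.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"

	patched, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(patched))
}
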
	I0804 08:55:22.683925 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:22.683957 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:22.684079 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:22.696848 1653676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0804 08:55:22.698010 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.084233 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 08:55:23.094208 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 08:55:23.103030 1653676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 08:55:23.103076 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 08:55:23.111645 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.120216 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 08:55:23.128524 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.137020 1653676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 08:55:23.144932 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 08:55:23.153318 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 08:55:23.161730 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 08:55:23.170124 1653676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 08:55:23.176419 1653676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 08:55:23.177058 1653676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 08:55:23.184211 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:23.265466 1653676 ssh_runner.go:195] Run: sudo systemctl restart containerd
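
The sed edits above align containerd with the "cgroupfs" driver detected on the host, chiefly by forcing SystemdCgroup = false in the runc options before restarting containerd. The equivalent substitution in Go, over an assumed config.toml fragment, looks like this:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed fragment of /etc/containerd/config.toml (illustrative only).
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same substitution as the sed command above: force cgroupfs by
	// setting SystemdCgroup = false while preserving indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
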
	I0804 08:55:23.467281 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:23.467337 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:23.467388 1653676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 08:55:23.477772 1653676 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0804 08:55:23.477865 1653676 command_runner.go:130] > [Unit]
	I0804 08:55:23.477892 1653676 command_runner.go:130] > Description=Docker Application Container Engine
	I0804 08:55:23.477904 1653676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0804 08:55:23.477912 1653676 command_runner.go:130] > BindsTo=containerd.service
	I0804 08:55:23.477924 1653676 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0804 08:55:23.477935 1653676 command_runner.go:130] > Wants=network-online.target
	I0804 08:55:23.477942 1653676 command_runner.go:130] > Requires=docker.socket
	I0804 08:55:23.477950 1653676 command_runner.go:130] > StartLimitBurst=3
	I0804 08:55:23.477958 1653676 command_runner.go:130] > StartLimitIntervalSec=60
	I0804 08:55:23.477963 1653676 command_runner.go:130] > [Service]
	I0804 08:55:23.477971 1653676 command_runner.go:130] > Type=notify
	I0804 08:55:23.477977 1653676 command_runner.go:130] > Restart=on-failure
	I0804 08:55:23.477992 1653676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0804 08:55:23.478010 1653676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0804 08:55:23.478023 1653676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0804 08:55:23.478048 1653676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0804 08:55:23.478062 1653676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0804 08:55:23.478073 1653676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0804 08:55:23.478088 1653676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0804 08:55:23.478104 1653676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0804 08:55:23.478125 1653676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0804 08:55:23.478140 1653676 command_runner.go:130] > ExecStart=
	I0804 08:55:23.478162 1653676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0804 08:55:23.478451 1653676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0804 08:55:23.478489 1653676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0804 08:55:23.478505 1653676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0804 08:55:23.478520 1653676 command_runner.go:130] > LimitNOFILE=infinity
	I0804 08:55:23.478529 1653676 command_runner.go:130] > LimitNPROC=infinity
	I0804 08:55:23.478536 1653676 command_runner.go:130] > LimitCORE=infinity
	I0804 08:55:23.478544 1653676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0804 08:55:23.478559 1653676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0804 08:55:23.478570 1653676 command_runner.go:130] > TasksMax=infinity
	I0804 08:55:23.478576 1653676 command_runner.go:130] > TimeoutStartSec=0
	I0804 08:55:23.478586 1653676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0804 08:55:23.478592 1653676 command_runner.go:130] > Delegate=yes
	I0804 08:55:23.478606 1653676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0804 08:55:23.478612 1653676 command_runner.go:130] > KillMode=process
	I0804 08:55:23.478659 1653676 command_runner.go:130] > [Install]
	I0804 08:55:23.478680 1653676 command_runner.go:130] > WantedBy=multi-user.target
	I0804 08:55:23.480586 1653676 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 08:55:23.480654 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 08:55:23.491375 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:23.505761 1653676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0804 08:55:23.506806 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.923432 1653676 ssh_runner.go:195] Run: which cri-dockerd
	I0804 08:55:23.926961 1653676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0804 08:55:23.927156 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 08:55:23.935149 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 08:55:23.950832 1653676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 08:55:24.042992 1653676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 08:55:24.297851 1653676 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 08:55:24.297998 1653676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 08:55:24.377001 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 08:55:24.388783 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:24.510366 1653676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 08:55:24.982429 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 08:55:24.992600 1653676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 08:55:25.006985 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.016432 1653676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 08:55:25.099651 1653676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 08:55:25.175485 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.251241 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 08:55:25.263161 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 08:55:25.272497 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.348098 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 08:55:25.408736 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.419584 1653676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 08:55:25.419655 1653676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 08:55:25.422672 1653676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0804 08:55:25.422693 1653676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 08:55:25.422702 1653676 command_runner.go:130] > Device: 45h/69d	Inode: 1258        Links: 1
	I0804 08:55:25.422711 1653676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0804 08:55:25.422722 1653676 command_runner.go:130] > Access: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422730 1653676 command_runner.go:130] > Modify: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422743 1653676 command_runner.go:130] > Change: 2025-08-04 08:55:25.357889711 +0000
	I0804 08:55:25.422749 1653676 command_runner.go:130] >  Birth: -
	I0804 08:55:25.422776 1653676 start.go:563] Will wait 60s for crictl version
	I0804 08:55:25.422814 1653676 ssh_runner.go:195] Run: which crictl
	I0804 08:55:25.425611 1653676 command_runner.go:130] > /usr/bin/crictl
	I0804 08:55:25.425730 1653676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 08:55:25.455697 1653676 command_runner.go:130] > Version:  0.1.0
	I0804 08:55:25.455721 1653676 command_runner.go:130] > RuntimeName:  docker
	I0804 08:55:25.455727 1653676 command_runner.go:130] > RuntimeVersion:  28.3.3
	I0804 08:55:25.455733 1653676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 08:55:25.458002 1653676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 08:55:25.458069 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.480067 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.481564 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.502625 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.506722 1653676 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 08:55:25.506807 1653676 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 08:55:25.523376 1653676 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 08:55:25.526929 1653676 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0804 08:55:25.527043 1653676 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 08:55:25.527223 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:25.922076 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.309911 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.726305 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:26.726461 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.101061 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.477147 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.859614 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.878541 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.878563 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.878570 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.878580 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.878585 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.878590 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.878595 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.878599 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.878603 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.879821 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.879847 1653676 docker.go:633] Images already preloaded, skipping extraction
	I0804 08:55:27.879906 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.898058 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.898084 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.898091 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.898095 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.898099 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.898103 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.898109 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.898113 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.898117 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.898143 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.898167 1653676 cache_images.go:85] Images are preloaded, skipping loading
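
The two identical listings above confirm the preload tarball already provides every image Kubernetes v1.34.0-beta.0 needs, so tarball extraction is skipped. A small sketch of that verification follows; it shells out to the same docker images command and spot-checks a few of the expected tags (requires a running Docker daemon).

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same listing the log runs above.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	// Spot-check a few of the images the preload is expected to contain.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.34.0-beta.0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		fmt.Printf("%-55s present=%v\n", want, have[want])
	}
}
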
	I0804 08:55:27.898180 1653676 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 08:55:27.898290 1653676 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 08:55:27.898340 1653676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 08:55:27.944494 1653676 command_runner.go:130] > cgroupfs
	I0804 08:55:27.946023 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:27.946045 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:27.946061 1653676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 08:55:27.946082 1653676 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 08:55:27.946247 1653676 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
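
Worth noting in the KubeletConfiguration above: disk-based eviction is deliberately disabled ("0%" thresholds, imageGCHighThresholdPercent: 100) so CI hosts with mostly-full disks can still schedule pods. A sketch that round-trips just those fields, assuming the gopkg.in/yaml.v3 package:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Only the KubeletConfiguration fields this check cares about.
type kubeletConfig struct {
	CgroupDriver                string            `yaml:"cgroupDriver"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	// Excerpt of the generated config above.
	doc := `cgroupDriver: cgroupfs
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  imagefs.available: "0%"
failSwapOn: false
`
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver=%s imageGCHigh=%d%% evictions=%v\n",
		cfg.CgroupDriver, cfg.ImageGCHighThresholdPercent, cfg.EvictionHard)
}
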
	
	I0804 08:55:27.946320 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 08:55:27.953892 1653676 command_runner.go:130] > kubeadm
	I0804 08:55:27.953910 1653676 command_runner.go:130] > kubectl
	I0804 08:55:27.953915 1653676 command_runner.go:130] > kubelet
	I0804 08:55:27.954677 1653676 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 08:55:27.954730 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 08:55:27.962553 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 08:55:27.978365 1653676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 08:55:27.994068 1653676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0804 08:55:28.009976 1653676 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 08:55:28.013276 1653676 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0804 08:55:28.013353 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.101449 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.112250 1653676 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 08:55:28.112270 1653676 certs.go:194] generating shared ca certs ...
	I0804 08:55:28.112291 1653676 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.112464 1653676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 08:55:28.112506 1653676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 08:55:28.112516 1653676 certs.go:256] generating profile certs ...
	I0804 08:55:28.112631 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 08:55:28.112686 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 08:55:28.112722 1653676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 08:55:28.112733 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 08:55:28.112747 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 08:55:28.112759 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 08:55:28.112772 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 08:55:28.112783 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 08:55:28.112795 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 08:55:28.112808 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 08:55:28.112819 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 08:55:28.112866 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 08:55:28.112898 1653676 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 08:55:28.112907 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 08:55:28.112929 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 08:55:28.112954 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 08:55:28.112975 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 08:55:28.113011 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:28.113036 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.113051 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.113068 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem -> /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.113660 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 08:55:28.135009 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 08:55:28.155784 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 08:55:28.176520 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 08:55:28.197558 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 08:55:28.218349 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 08:55:28.239391 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 08:55:28.259973 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 08:55:28.280899 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 08:55:28.301872 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 08:55:28.322816 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 08:55:28.343561 1653676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 08:55:28.359122 1653676 ssh_runner.go:195] Run: openssl version
	I0804 08:55:28.363884 1653676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0804 08:55:28.364128 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 08:55:28.372266 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375320 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375365 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375402 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.381281 1653676 command_runner.go:130] > b5213941
	I0804 08:55:28.381530 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 08:55:28.388997 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 08:55:28.397048 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399946 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399991 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.400016 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.406052 1653676 command_runner.go:130] > 51391683
	I0804 08:55:28.406304 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 08:55:28.413987 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 08:55:28.422286 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425317 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425349 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425376 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.431562 1653676 command_runner.go:130] > 3ec20f2e
	I0804 08:55:28.431844 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
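The three hash-and-symlink pairs above follow OpenSSL's subject-hash trust-store convention: a CA under /etc/ssl/certs is looked up by the hash of its subject name, so each PEM gets a <hash>.0 symlink. A minimal sketch of the same two steps, using the minikubeCA.pem path from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints the subject hash, b5213941 above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"             # the lookup name OpenSSL expects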
	I0804 08:55:28.439543 1653676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442556 1653676 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442581 1653676 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:28.442590 1653676 command_runner.go:130] > Device: 801h/2049d	Inode: 822354      Links: 1
	I0804 08:55:28.442597 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:28.442603 1653676 command_runner.go:130] > Access: 2025-08-04 08:51:18.188665144 +0000
	I0804 08:55:28.442607 1653676 command_runner.go:130] > Modify: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442614 1653676 command_runner.go:130] > Change: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442619 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442691 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 08:55:28.448546 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.448806 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 08:55:28.454608 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.454889 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 08:55:28.460580 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.460805 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 08:55:28.466615 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.466839 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 08:55:28.472661 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.472705 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 08:55:28.478445 1653676 command_runner.go:130] > Certificate will not expire
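Each of the checks above relies on openssl's -checkend flag, which exits 0 (printing "Certificate will not expire") when the certificate is still valid the given number of seconds from now; minikube passes 86400, i.e. a 24-hour margin. The equivalent manual check, as a sketch:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400  # exit status 0 while more than 24h of validity remain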
	I0804 08:55:28.478508 1653676 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
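The StartCluster blob above is the in-memory profile config. The same settings persist on the host and can be inspected directly; a sketch assuming the default MINIKUBE_HOME layout and that jq is installed:

    jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ServiceCIDR}' \
      ~/.minikube/profiles/functional-699837/config.json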
	I0804 08:55:28.478619 1653676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 08:55:28.496419 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 08:55:28.503804 1653676 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0804 08:55:28.503825 1653676 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0804 08:55:28.503833 1653676 command_runner.go:130] > /var/lib/minikube/etcd:
	I0804 08:55:28.504531 1653676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 08:55:28.504546 1653676 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 08:55:28.504584 1653676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 08:55:28.511980 1653676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 08:55:28.512384 1653676 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-699837" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.512513 1653676 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "functional-699837" cluster setting kubeconfig missing "functional-699837" context setting]
	I0804 08:55:28.512791 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.513199 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.513384 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.513811 1653676 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0804 08:55:28.513826 1653676 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0804 08:55:28.513833 1653676 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0804 08:55:28.513839 1653676 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0804 08:55:28.513844 1653676 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
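The envvar.go lines record client-go's feature-gate defaults at client construction time; these gates are read from KUBE_FEATURE_<Name> environment variables, so they can in principle be flipped per process. A hypothetical example (not something this test sets):

    KUBE_FEATURE_WatchListClient=true out/minikube-linux-amd64 start -p functional-699837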
	I0804 08:55:28.513876 1653676 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0804 08:55:28.514257 1653676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 08:55:28.521605 1653676 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0804 08:55:28.521634 1653676 kubeadm.go:593] duration metric: took 17.082556ms to restartPrimaryControlPlane
	I0804 08:55:28.521645 1653676 kubeadm.go:394] duration metric: took 43.142663ms to StartCluster
	I0804 08:55:28.521666 1653676 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.521736 1653676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.522230 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.522435 1653676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 08:55:28.522512 1653676 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 08:55:28.522651 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:28.522656 1653676 addons.go:69] Setting storage-provisioner=true in profile "functional-699837"
	I0804 08:55:28.522728 1653676 addons.go:238] Setting addon storage-provisioner=true in "functional-699837"
	I0804 08:55:28.522681 1653676 addons.go:69] Setting default-storageclass=true in profile "functional-699837"
	I0804 08:55:28.522800 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.522810 1653676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-699837"
	I0804 08:55:28.523050 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.523236 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.524415 1653676 out.go:177] * Verifying Kubernetes components...
	I0804 08:55:28.525459 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.542729 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.542941 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.543225 1653676 addons.go:238] Setting addon default-storageclass=true in "functional-699837"
	I0804 08:55:28.543255 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.543552 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.543853 1653676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:28.545053 1653676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.545072 1653676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 08:55:28.545126 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.560950 1653676 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.560976 1653676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 08:55:28.561028 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.561396 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.582841 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.617980 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.628515 1653676 node_ready.go:35] waiting up to 6m0s for node "functional-699837" to be "Ready" ...
	I0804 08:55:28.628655 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:28.628715 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:28.628984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
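The request/response pair above is the node-readiness poll: a raw GET of /api/v1/nodes/functional-699837 with a protobuf-preferring Accept header; status="" with milliseconds=0 means the connection never reached the server. A kubectl sketch of the same probe:

    kubectl --context functional-699837 get node functional-699837 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'  # expect "True" once the node is Ready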
	I0804 08:55:28.669259 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.681042 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.723292 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.723334 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.723359 1653676 retry.go:31] will retry after 184.647945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732373 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.732422 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732443 1653676 retry.go:31] will retry after 304.201438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
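Both addon applies fail the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, which is refusing connections on localhost:8441 while the control plane restarts. The --validate=false escape hatch named in the error would only skip that schema fetch; the apply itself still needs a reachable apiserver, so retrying (as the log does from here on) is the only real option. A sketch of the flag's effect:

    kubectl apply --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml  # skips the OpenAPI fetch, but the server must still be up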
	I0804 08:55:28.908717 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.958881 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.958925 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.958945 1653676 retry.go:31] will retry after 476.117899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.037179 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.088413 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.088468 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.088491 1653676 retry.go:31] will retry after 197.264107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.129716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.130032 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:29.286304 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.334473 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.337029 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.337065 1653676 retry.go:31] will retry after 823.238005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.435237 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:29.482679 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.485403 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.485436 1653676 retry.go:31] will retry after 800.644745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.629726 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.629799 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.630104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.128837 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.128917 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.129285 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.161434 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.213167 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.213231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.213275 1653676 retry.go:31] will retry after 656.353253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.286342 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.334470 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.336981 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.337012 1653676 retry.go:31] will retry after 508.253019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.629489 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.629586 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.629950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:30.630017 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
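"connect: connection refused" on 192.168.49.2:8441 means nothing is listening on the apiserver port yet, as opposed to a timeout from an unreachable host. A direct probe of the same endpoint, as a sketch:

    curl -k https://192.168.49.2:8441/healthz  # refused until kube-apiserver binds the port, then "ok"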
	I0804 08:55:30.845486 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.869953 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.897779 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.897836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.897862 1653676 retry.go:31] will retry after 1.094600532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922225 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.922291 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922314 1653676 retry.go:31] will retry after 805.303636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.129681 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.628691 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.628775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.728325 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:31.779677 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:31.779728 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.779748 1653676 retry.go:31] will retry after 2.236258385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.993064 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:32.044458 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:32.044511 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.044552 1653676 retry.go:31] will retry after 1.503507165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.129706 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.129775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:32.629732 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.629813 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.630171 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:32.630256 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:33.128768 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.129210 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:33.548844 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:33.599998 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:33.600058 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.600081 1653676 retry.go:31] will retry after 1.994543648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.629251 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.629339 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.629634 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.017206 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:34.068508 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:34.068573 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.068597 1653676 retry.go:31] will retry after 3.823609715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.128678 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.129067 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.629688 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.629764 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.630098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.129721 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.130115 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:35.130189 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:35.595749 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:35.629120 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.629209 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.629582 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.645323 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:35.647845 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:35.647880 1653676 retry.go:31] will retry after 3.559085278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:36.129701 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.129780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.130117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:36.628869 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.628953 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.629336 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.129085 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.129171 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.129515 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.629335 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.629411 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.629704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:37.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:37.893118 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:37.941760 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:37.944423 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:37.944452 1653676 retry.go:31] will retry after 4.996473933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[node poll: 3 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:38.128–08:55:39.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	I0804 08:55:39.207320 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:39.257569 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:39.257615 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.257640 1653676 retry.go:31] will retry after 8.124151658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[node poll: 2 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:39.629–08:55:40.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:40.129693 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 4 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:40.629–08:55:42.130, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:42.130063 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:42.629629 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.629709 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.630062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.941490 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:42.990741 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:42.993232 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:42.993279 1653676 retry.go:31] will retry after 4.825851231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[node poll: 4 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:43.129–08:55:44.629, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:44.629803 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:45.129–08:55:47.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:47.129674 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:47.381978 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:47.430195 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.433093 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.433123 1653676 retry.go:31] will retry after 10.012002454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.629500 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.629573 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.629910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.820313 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:47.870430 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.870476 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.870493 1653676 retry.go:31] will retry after 10.075489679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
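
The retry delays retry.go logs grow roughly geometrically with jitter (3.6 s, 5.0 s, 8.1 s, 10.0 s for storage-provisioner.yaml; 5.0 s, 4.8 s, 10.1 s for storageclass.yaml). A self-contained sketch of that pattern, with illustrative constants rather than minikube's actual tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with an exponentially growing, jittered
// delay, the pattern the retry.go lines above record.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return err
}

func main() {
	err := retryWithBackoff(4, 3*time.Second, func() error {
		return errors.New("connect: connection refused")
	})
	fmt.Println("gave up:", err)
}
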
	[node poll: 3 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:48.128–08:55:49.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:49.129864 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:49.629–08:55:51.629, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:51.629473 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:52.129–08:55:54.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:55:54.129334 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 3 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:55:54.629–08:55:55.630, ~500 ms apart; every response empty (status="" in 0 ms)]
	I0804 08:55:56.128707 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:56.128802 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:57.445382 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:57.946208 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:06.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:06.129644 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:56:06.129694 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:06.129736 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.130254 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:16.130338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
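
The failure mode changes here: from the 08:55:56.128 request onward the TCP connect succeeds but the TLS handshake does not, so each poll hangs for the client's full 10 s handshake deadline (the milliseconds=10000 responses) instead of failing instantly. That usually means the apiserver's port is listening again but the process is not yet serving. A standalone probe showing the client knob involved (not minikube's code; endpoint chosen for illustration):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// The 10 s cutoff observed above is a handshake deadline like this one.
			TLSHandshakeTimeout: 10 * time.Second,
			// Demo only: skip cert verification for a bare liveness probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. net/http: TLS handshake timeout
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}
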
	I0804 08:56:16.130408 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:16.130480 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.262782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=132
	I0804 08:56:17.263910 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:56:17.264149 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264472 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
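
The with_retry.go line above shows client-go honoring a server-sent Retry-After header (delay="1s" attempt=1); apiservers commonly send Retry-After with 429/503 responses while starting up or shedding load. A minimal sketch of the same behavior with plain net/http, assuming the integer-seconds form of the header:

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getHonoringRetryAfter retries a GET when the response carries an
// integer Retry-After header, mirroring the delay="1s" attempt=1 log line.
func getHonoringRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		secs, convErr := strconv.Atoi(resp.Header.Get("Retry-After"))
		if convErr != nil || attempt >= maxAttempts {
			return resp, nil // no usable Retry-After, or out of attempts
		}
		resp.Body.Close() // discard this response and wait the server's delay
		fmt.Printf("got Retry-After=%ds, attempt %d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getHonoringRetryAfter("https://192.168.49.2:8441/healthz", 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
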
	I0804 08:56:17.264610 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.264716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264973 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:17.267370 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267420 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (19.822003727s)
	W0804 08:56:17.267450 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267470 1653676 retry.go:31] will retry after 18.146841122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267784 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267815 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (19.321577292s)
	W0804 08:56:17.267836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267852 1653676 retry.go:31] will retry after 19.077492147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
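
Note that both kubectl applies launched at 08:55:57 only completed at 08:56:17 (19.8 s and 19.3 s): consistent with the handshake-timeout window above, the commands sat on a half-open apiserver socket until the connection was reset ("read: connection reset by peer") and then failed validation the same way as the earlier attempts.
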
	[node poll: 3 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:17.629–08:56:18.630, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:18.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:19.129–08:56:21.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:21.129324 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:21.629–08:56:23.629, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:23.629437 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:24.129–08:56:26.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:26.129469 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:26.629–08:56:28.630, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:28.630338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:29.129–08:56:31.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:31.129426 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 5 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:31.629–08:56:33.629, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:33.629497 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[node poll: 3 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:34.129–08:56:35.130, ~500 ms apart; every response empty (status="" in 0 ms)]
	I0804 08:56:35.414447 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:35.463330 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:35.466231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.466267 1653676 retry.go:31] will retry after 13.873476046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.629483 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:35.629558 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:35.629897 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:35.629960 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:36.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:36.129713 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:36.130046 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:36.346375 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:36.394439 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:36.396962 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.396996 1653676 retry.go:31] will retry after 20.764306788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[node poll: 4 identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycles, 08:56:36.629–08:56:38.129, ~500 ms apart; every response empty (status="" in 0 ms)]
	W0804 08:56:38.129504 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:38.629094 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:38.629186 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:38.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:39.129329 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:39.129403 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:39.129733 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:39.629535 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:39.629607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:39.629940 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:40.129719 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:40.129801 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:40.130145 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:40.130216 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:40.628884 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:40.628964 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:40.629317 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:41.128956 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:41.129035 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:41.129355 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:41.629076 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:41.629150 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:41.629485 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:42.129286 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:42.129362 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:42.129691 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:42.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:42.629537 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:42.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:42.629938 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:43.129673 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:43.129756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:43.130100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:43.628809 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:43.628889 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:43.629208 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:44.128939 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:44.129019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:44.129378 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:44.629097 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:44.629182 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:44.629521 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:45.129310 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:45.129387 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:45.129760 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:45.129832 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:45.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:45.629633 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:45.630029 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:46.128691 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:46.128772 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:46.129112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:46.628845 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:46.628920 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:46.629291 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:47.129029 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:47.129126 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:47.129500 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:47.629337 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:47.629420 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:47.629741 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:47.629802 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:48.129626 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:48.129722 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:48.130077 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:48.628742 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:48.628836 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:48.629189 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:49.128743 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:49.128827 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:49.129185 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:49.340493 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:49.391267 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:49.391322 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.391344 1653676 retry.go:31] will retry after 22.530122873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.629701 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:49.629775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:49.630094 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:49.630167 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:50.128781 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:50.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:50.129231 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:50.628838 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:50.628912 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:50.629276 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:51.129234 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:51.129318 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:51.129637 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:51.629350 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:51.629441 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:51.629759 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:52.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:52.129656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:52.129995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:52.130058 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:52.628710 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:52.628778 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:52.629090 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:53.128873 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:53.128994 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:53.129417 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:53.629155 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:53.629225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:53.629551 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:54.129336 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:54.129409 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:54.129789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:54.629582 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:54.629657 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:54.629978 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:54.630042 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:55.128737 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:55.128827 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:55.129209 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:55.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:55.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:55.629995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:56.129718 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:56.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:56.130127 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:56.628839 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:56.628957 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:56.629326 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:57.129049 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:57.129165 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:57.129545 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:57.129614 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:57.161690 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:57.212094 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212172 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212321 1653676 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
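
The stderr above pinpoints the failure mode: kubectl's client-side validation first downloads the OpenAPI schema from https://localhost:8441/openapi/v2, so while the apiserver is down even a syntactically valid manifest fails before it is ever submitted. kubectl's own suggestion in that message, --validate=false, skips the schema download; a hedged sketch of applying it (paths copied from the log, flag taken from the error text) is below.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --validate=false skips the OpenAPI download that is failing in
    	// the log; the apply itself still needs a reachable apiserver, so
    	// this only removes the /openapi/v2 dependency, not the outage.
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
    		"apply", "--force", "--validate=false",
    		"-f", "/etc/kubernetes/addons/storageclass.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }

Note that in this run the flag would not have rescued the addon: the apply still has to reach the apiserver, which is refusing connections on both localhost:8441 and 192.168.49.2:8441.
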
	I0804 08:56:57.629703 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:57.629786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:57.630137 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:58.128910 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:58.128986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:58.129348 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:58.629128 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:58.629212 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:58.629557 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:59.129348 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:59.129423 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:59.129768 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:59.129831 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:59.629552 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:59.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:59.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:00.128668 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:00.128748 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:00.129104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:00.628883 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:00.628972 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:00.629344 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:01.128990 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:01.129091 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:01.129447 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:01.629187 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:01.629284 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:01.629625 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:01.629697 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:02.129438 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:02.129511 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:02.129847 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:02.629620 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:02.629714 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:02.630041 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:03.128760 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:03.128862 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:03.129196 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:03.628968 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:03.629065 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:03.629415 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:04.129145 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:04.129220 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:04.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:04.129643 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:04.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:04.629445 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:04.629746 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:05.129583 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:05.129661 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:05.129993 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:05.628708 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:05.628794 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:05.629079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:06.128832 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:06.128925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:06.129318 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:06.629043 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:06.629138 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:06.629480 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:06.629558 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:07.129326 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:07.129425 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:07.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:07.629601 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:07.629694 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:07.630065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:08.128801 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:08.128909 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:08.129315 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:08.629044 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:08.629145 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:08.629528 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:08.629593 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:09.129358 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:09.129453 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:09.129910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:09.629675 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:09.629754 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:09.630073 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:10.128808 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:10.128885 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:10.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:10.628993 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:10.629089 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:10.629434 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:11.129231 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:11.129347 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:11.129707 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:11.129770 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:11.629527 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:11.629607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:11.629894 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:11.922305 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:57:11.970691 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973096 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973263 1653676 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 08:57:11.975142 1653676 out.go:177] * Enabled addons: 
	I0804 08:57:11.976503 1653676 addons.go:514] duration metric: took 1m43.454009966s for enable addons: enabled=[]
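
The empty enabled=[] after 1m43s marks where minikube gives up on both addons and moves on; the node-readiness poll below continues regardless. The addons.go:514 line suggests bookkeeping of roughly the shape sketched here: run each enable callback, keep the ones that succeeded, and emit a duration metric at the end. All names in this sketch are assumptions, not minikube's real code.

    package main

    import (
    	"fmt"
    	"time"
    )

    // enableAddons is an illustrative reconstruction of the flow logged
    // above: each addon's enable callback runs, failures are surfaced as
    // warnings, and a duration metric reports the (here empty) set that
    // actually came up.
    func enableAddons(callbacks map[string]func() error) []string {
    	start := time.Now()
    	enabled := []string{}
    	for name, enable := range callbacks {
    		if err := enable(); err != nil {
    			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
    			continue
    		}
    		enabled = append(enabled, name)
    	}
    	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
    		time.Since(start), enabled)
    	return enabled
    }

    func main() {
    	enableAddons(map[string]func() error{
    		"storage-provisioner":  func() error { return fmt.Errorf("connection refused") },
    		"default-storageclass": func() error { return fmt.Errorf("connection refused") },
    	})
    }
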
	I0804 08:57:12.129480 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:12.129579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:12.129915 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:12.629535 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:12.629640 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:12.629960 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:13.129603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:13.129676 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:13.130018 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:13.130084 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:13.629651 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:13.629730 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:13.630028 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:14.129674 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:14.129818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:14.130187 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:14.628738 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:14.628810 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:14.629106 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:15.128681 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:15.128756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:15.129116 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:15.628700 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:15.628781 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:15.629089 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:15.629155 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:16.128845 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:16.128921 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:16.129302 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:16.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:16.628918 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:16.629233 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:17.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:17.128893 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:17.129257 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:17.628792 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:17.629202 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:17.629293 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:18.128759 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:18.128847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:18.129200 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:18.629041 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:18.629121 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:18.629468 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:19.129039 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:19.129112 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:19.129489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:19.629035 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:19.629105 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:19.629466 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:19.629532 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:20.129056 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:20.129136 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:20.129527 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:20.629075 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:20.629154 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:20.629482 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:21.129294 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:21.129367 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:21.129717 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:21.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:21.629463 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:21.629764 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:21.629831 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:22.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:22.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:22.129781 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:22.629426 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:22.629501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:22.629789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:23.129450 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:23.129535 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:23.129870 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:23.629332 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:23.629418 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:23.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:24.128868 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:24.128960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:24.129333 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:24.129416 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response pair repeats every ~500ms from 08:57:24 through 08:58:24, always with empty request body and empty response (status="" headers="" milliseconds=0), each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 emits the same "will retry" warning after roughly every fifth attempt (08:57:26, 08:57:28, 08:57:31, ..., 08:58:22, 08:58:24).]
	I0804 08:58:25.129446 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:25.129542 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:25.129838 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:25.629614 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:25.629692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:25.630005 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:26.128734 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:26.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:26.129143 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:26.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:26.628945 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:26.629295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:27.129001 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:27.129078 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:27.129430 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:27.129497 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:27.629154 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:27.629226 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:27.629562 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:28.129344 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:28.129447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:28.129769 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:28.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:28.629542 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:28.629856 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:29.129664 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:29.129750 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:29.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:29.130200 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:29.628750 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:29.628825 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:29.629116 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:30.128860 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:30.128943 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:30.129300 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:30.629025 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:30.629107 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:30.629409 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:31.129309 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:31.129383 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:31.129732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:31.629506 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:31.629578 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:31.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:31.629930 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:32.129669 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:32.129745 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:32.130096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:32.628810 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:32.628890 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:32.629161 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:33.128895 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:33.128972 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:33.129352 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:33.629078 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:33.629161 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:33.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:34.129351 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:34.129430 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:34.129807 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:34.129887 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:34.629642 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:34.629714 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:34.630028 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:35.128785 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:35.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:35.129207 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:35.628963 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:35.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:35.629350 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:36.129133 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:36.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:36.129495 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:36.629057 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:36.629152 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:36.629476 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:36.629541 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:37.129344 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:37.129435 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:37.129779 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:37.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:37.629665 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:37.629987 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:38.128723 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:38.128818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:38.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:38.628949 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:38.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:38.629367 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:39.129078 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:39.129177 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:39.129555 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:39.129622 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:39.629381 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:39.629467 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:39.629800 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:40.129606 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:40.129705 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:40.130062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:40.628786 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:40.628889 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:40.629233 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:41.129024 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:41.129100 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:41.129462 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:41.629280 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:41.629379 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:41.629701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:41.629762 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:42.129521 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:42.129597 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:42.129950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:42.628667 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:42.628756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:42.629073 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:43.128819 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:43.128897 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:43.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:43.629033 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:43.629148 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:43.629489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:44.129324 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:44.129407 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:44.129750 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:44.129816 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:44.629574 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:44.629658 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:44.629972 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:45.128703 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:45.128778 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:45.129125 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:45.628842 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:45.628933 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:45.629252 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:46.128948 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:46.129033 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:46.129380 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:46.629108 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:46.629185 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:46.629520 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:46.629580 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:47.129340 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:47.129419 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:47.129767 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:47.629563 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:47.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:47.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:48.128670 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:48.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:48.129104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:48.629702 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:48.629776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:48.630085 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:48.630146 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:49.128823 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:49.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:49.129229 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:49.628981 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:49.629065 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:49.629392 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:50.129122 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:50.129198 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:50.129554 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:50.629352 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:50.629447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:50.629788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:51.129551 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:51.129636 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:51.129966 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:51.130030 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:51.628723 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:51.628822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:51.629134 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:52.128861 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:52.128966 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:52.129334 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:52.629047 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:52.629124 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:52.629436 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:53.129166 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:53.129271 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:53.129578 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:53.629347 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:53.629425 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:53.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:53.629789 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:54.129531 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:54.129608 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:54.130022 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:54.628732 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:54.628807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:54.629107 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:55.128818 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:55.128901 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:55.129281 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:55.629003 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:55.629084 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:55.629411 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:56.129310 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:56.129399 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:56.129752 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:56.129817 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:56.629559 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:56.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:56.629927 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:57.129729 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:57.129818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:57.130192 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:57.628939 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:57.629019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:57.629349 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:58.129065 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:58.129186 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:58.129616 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:58.629318 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:58.629398 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:58.629699 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:58.629757 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:59.129513 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:59.129603 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:59.129965 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:59.628703 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:59.628781 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:59.629083 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:00.128805 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:00.128896 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:00.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:00.629019 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:00.629098 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:00.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:01.129270 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:01.129348 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:01.129717 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:01.129794 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:01.629537 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:01.629608 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:01.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:02.128689 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:02.128769 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:02.129142 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:02.628902 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:02.628987 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:02.629315 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:03.129038 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:03.129117 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:03.129496 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:03.629371 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:03.629457 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:03.629773 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:03.629837 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
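The loop condensed above is minikube waiting for the node's "Ready" condition: a GET against the apiserver every ~500 ms, retried for as long as the endpoint refuses connections. A minimal Go sketch of such a poll loop, assuming a plain net/http client and a 10 s per-request timeout; this is an illustration of the pattern, not minikube's actual node_ready implementation:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// waitForNode polls url until it gets any HTTP response or the deadline
// passes. Dial errors such as "connection refused" are logged and
// retried, mirroring the node_ready loop in the log above.
func waitForNode(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("error getting node (will retry): %v", err)
			time.Sleep(interval)
			continue
		}
		resp.Body.Close()
		return nil // apiserver answered; the caller would now inspect the Ready condition
	}
	return fmt.Errorf("node not reachable within %v", timeout)
}

func main() {
	err := waitForNode("https://192.168.49.2:8441/api/v1/nodes/functional-699837",
		500*time.Millisecond, 6*time.Minute)
	fmt.Println("result:", err)
}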
	I0804 08:59:04.129591 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:04.129684 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:14.133399 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10003
	W0804 08:59:14.133474 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
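Note the failure mode change here: up to 08:59:04 every dial failed instantly (milliseconds=0) with "connection refused", meaning nothing was listening on 192.168.49.2:8441, whereas this request hangs for 10 s and fails with a TLS handshake timeout, meaning the TCP connect now succeeds but the apiserver never completes the handshake, typical of a server that is still starting or wedged. The milliseconds=10003 figure matches net/http's default 10 s TLSHandshakeTimeout. A small sketch of telling the two modes apart, assuming the standard-library client (the /healthz path is illustrative):

package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"syscall"
	"time"
)

// classify distinguishes the two failure modes seen in the log:
// ECONNREFUSED (port closed, apiserver process not listening) versus a
// timeout (port open but the server never finishes the TLS handshake).
func classify(err error) string {
	if errors.Is(err, syscall.ECONNREFUSED) {
		return "connection refused: nothing listening on the port"
	}
	var nerr net.Error
	if errors.As(err, &nerr) && nerr.Timeout() {
		return "timeout: port reachable but server not completing the handshake"
	}
	return "other: " + err.Error()
}

func main() {
	// net/http's DefaultTransport uses TLSHandshakeTimeout: 10s, which
	// matches the milliseconds=10003 responses in the log above.
	client := &http.Client{Transport: &http.Transport{TLSHandshakeTimeout: 10 * time.Second}}
	if _, err := client.Get("https://192.168.49.2:8441/healthz"); err != nil {
		fmt.Println(classify(err))
	}
}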
	I0804 08:59:14.133535 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:14.133571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.134577 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:59:24.134670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:59:24.134743 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:24.134791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.447100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=312
	I0804 08:59:25.448003 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:59:25.448109 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448371 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:25.448473 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.448503 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
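At 08:59:24 the apiserver finally answers (milliseconds=312), and with_retry.go reports a Retry-After response: the server asked the client to back off 1 s before retrying, as apiservers commonly do with a 429/503 while still initializing. A minimal sketch of honoring a Retry-After header with a plain net/http client; this is the general pattern, not client-go's actual with_retry implementation:

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter performs a GET and, when the server answers with a
// Retry-After header (seconds form), sleeps that long and tries again,
// up to maxAttempts, mirroring the "Got a Retry-After response" line above.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		delay := resp.Header.Get("Retry-After")
		if delay == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(delay)
		if err != nil {
			secs = 1 // fall back to a short delay on unparseable values
		}
		fmt.Printf("got a Retry-After response: delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getWithRetryAfter(http.DefaultClient,
		"https://192.168.49.2:8441/api/v1/nodes/functional-699837", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}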
	I0804 08:59:25.629198 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.629320 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.629693 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:26.629981 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the ~500 ms poll resumes and repeats through 08:59:40, each attempt again failing immediately with "connect: connection refused"; node_ready.go:55 logs the same "will retry" warning roughly every 2.5 s, the last at 08:59:38.629805 ...]
	I0804 08:59:40.128863 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.128946 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.629061 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.629132 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:41.129329 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.129415 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.129770 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:41.129836 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:41.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.629926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.129712 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.129803 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.130147 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.629230 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.129055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.129407 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.629110 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.629193 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.629549 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:43.629613 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:44.129360 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.129442 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.129809 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:44.629604 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.629695 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.629982 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.128765 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.628969 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.629365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:46.129219 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.129334 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.129701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:46.129778 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:46.629522 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.629594 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.629887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.129668 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.129774 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.130135 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.628848 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.628924 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.629222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.128974 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.129074 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.129460 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.629189 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.629275 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.629575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:48.629637 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:49.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.129460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.129826 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:49.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.128684 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.128784 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.129153 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.628866 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.628940 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.629236 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:51.128964 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.129053 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.129443 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:51.129520 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:51.629181 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.629285 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.129363 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.129782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.629637 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.629921 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.128676 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.128760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.129117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:53.629319 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:54.129011 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.129119 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.129458 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:54.629169 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.629255 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.629563 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.129370 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.129456 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.129803 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.629586 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.629656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:55.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:56.129716 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.129807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.130158 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:56.628872 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.628960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.629280 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.129030 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.129533 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.629322 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.629394 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.629681 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:58.129475 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.129969 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:58.130041 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:58.629691 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.629768 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.630065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.128877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.129109 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.129205 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.129657 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.629456 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.629529 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:00.629939 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:01.129658 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.129735 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.130048 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:01.628777 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.628856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.629190 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:02.128935 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:02.129010 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:02.129319 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:02.628797 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:02.628877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:02.629137 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:03.128821 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:03.128896 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:03.129167 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:03.129224 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:03.628891 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:03.628974 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:03.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:04.129012 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:04.129096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:04.129462 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:04.629177 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:04.629276 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:04.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:05.129034 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:05.129129 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:05.129588 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:05.129664 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:05.629416 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:05.629491 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:05.629807 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:06.129708 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:06.129798 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:06.130177 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:06.628914 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:06.628986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:06.629309 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:07.129052 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:07.129152 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:07.129545 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:07.629359 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:07.629447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:07.629774 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:07.629843 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:08.129619 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:08.129703 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:08.130076 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:08.628794 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:08.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:08.629209 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:09.128966 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:09.129044 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:09.129548 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:09.629398 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:09.629478 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:09.629790 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:10.129602 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:10.129686 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:10.130062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:10.130134 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:10.628810 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:10.628888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:10.629214 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:11.128747 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:11.128824 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:11.129152 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:11.628878 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:11.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:11.629286 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:12.129028 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:12.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:12.129473 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:12.629262 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:12.629338 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:12.629618 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:12.629689 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:13.129417 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:13.129501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:13.129842 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:13.629621 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:13.629693 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:13.629988 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:14.128745 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:14.128832 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:14.129178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:14.628945 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:14.629017 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:14.629397 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:15.129144 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:15.129234 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:15.129617 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:15.129699 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:15.629451 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:15.629537 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:15.629859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:16.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:16.129725 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:16.130080 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:16.628842 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:16.628922 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:16.629262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:17.128979 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:17.129061 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:17.129404 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:17.629119 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:17.629192 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:17.629516 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:17.629592 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:18.129336 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:18.129414 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:18.129755 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:18.629486 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:18.629564 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:18.629881 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:19.129669 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:19.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:19.130101 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:19.628816 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:19.628890 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:19.629175 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:20.128910 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:20.128984 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:20.129330 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:20.129401 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:20.629078 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:20.629168 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:20.629501 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:21.129330 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:21.129424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:21.129762 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:21.629541 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:21.629617 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:21.629961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:22.128702 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:22.128777 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:22.129131 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:22.628835 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:22.628922 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:22.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:22.629330 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:23.128997 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:23.129087 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:23.129464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:23.629182 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:23.629286 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:23.629610 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:24.129357 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:24.129433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:24.129789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:24.629580 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:24.629654 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:24.630004 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:24.630071 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:25.128772 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:25.128875 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:25.129222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:25.628964 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:25.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:25.629409 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:26.129166 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:26.129260 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:26.129614 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:26.629352 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:26.629430 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:26.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:27.129507 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:27.129584 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:27.129930 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:27.129995 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:27.628677 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:27.628763 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:27.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:28.128831 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:28.128925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:28.129213 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:28.629034 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:28.629122 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:28.629430 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:29.129177 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:29.129276 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:29.129670 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:29.629478 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:29.629549 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:29.629842 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:29.629908 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:30.129649 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:30.129723 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:30.130078 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:30.628813 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:30.628886 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:30.629190 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:31.128911 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:31.128986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:31.129333 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:31.629040 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:31.629132 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:31.629470 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:32.129197 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:32.129290 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:32.129685 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:32.129763 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:32.629496 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:32.629568 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:32.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:33.129687 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:33.129771 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:33.130108 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:33.628818 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:33.628897 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:33.629202 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:34.128946 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:34.129020 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:34.129415 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:34.629147 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:34.629219 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:34.629558 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:34.629628 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:35.129369 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:35.129455 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:35.129805 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:35.629601 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:35.629676 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:35.629982 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:36.128679 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:36.128768 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:36.129121 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:36.628838 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:36.628914 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:36.629211 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:37.128955 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:37.129054 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:37.129433 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:37.129502 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:37.629160 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:37.629260 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:37.629562 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:38.129342 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:38.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:38.129787 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:38.629253 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:38.629328 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:38.629641 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:39.129419 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:39.129511 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:39.129853 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:39.129927 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:39.629656 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:39.629726 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:39.630015 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:40.128736 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:40.128824 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:40.129162 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:40.628753 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:40.628832 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:40.629116 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:41.128932 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:41.129010 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:41.129303 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:41.629089 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:41.629196 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:41.629513 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:41.629580 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:42.129349 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:42.129434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:42.129769 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:42.629554 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:42.629629 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:42.629873 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:43.129642 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:43.129720 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:43.130046 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:43.628744 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:43.628817 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:43.629115 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:44.128831 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:44.128907 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:44.129297 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:44.129364 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:44.629025 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:44.629100 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:44.629418 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:45.129142 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:45.129218 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:45.129572 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:45.629352 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:45.629425 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:45.629726 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:46.129360 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:46.129445 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:46.129788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:46.129856 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:46.629588 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:46.629667 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:46.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:47.128666 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:47.128744 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:47.129078 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:47.628771 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:47.628847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:47.629196 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:48.128923 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:48.129000 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:48.129363 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:48.629072 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:48.629151 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:48.629471 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:48.629534 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:49.129296 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:49.129375 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:49.129725 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:49.629524 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:49.629595 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:49.629882 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:50.129670 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:50.129763 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:50.130141 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:50.628871 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:50.628953 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:50.629283 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:51.129015 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:51.129090 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:51.129476 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:51.129545 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:51.629293 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:51.629378 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:51.629669 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:52.129450 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:52.129528 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:52.129859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:52.629654 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:52.629726 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:52.630058 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:53.128778 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:53.128856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:53.129197 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:53.628936 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:53.629015 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:53.629344 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:53.629420 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:54.129104 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:54.129196 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:54.129579 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:54.629357 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:54.629426 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:54.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:55.129436 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:55.129536 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:55.129882 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:55.629646 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:55.629719 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:55.630035 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:55.630107 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:56.128773 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:56.128845 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:56.129181 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:56.628950 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:56.629034 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:56.629378 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:57.129105 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:57.129181 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:57.129559 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:57.629369 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:57.629438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:57.629742 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:58.129515 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:58.129595 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:58.129950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:58.130034 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:58.628750 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:58.628830 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:58.629147 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:59.128851 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:59.128928 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:59.129309 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:59.629042 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:59.629121 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:59.629455 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:00.129167 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:00.129270 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:00.129632 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:00.629423 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:00.629498 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:00.629793 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:00.629863 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:01.129591 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:01.129676 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:01.130023 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:01.628726 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:01.628804 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:01.629104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:02.128841 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:02.128936 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:02.129299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:02.629029 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:02.629126 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:02.629455 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:03.129199 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:03.129305 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:03.129646 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:03.129706 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:03.629451 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:03.629523 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:03.629841 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:04.129677 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:04.129766 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:04.130114 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:04.628842 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:04.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:04.629305 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:05.129074 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:05.129179 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:05.129561 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:05.629356 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:05.629434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:05.629760 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:05.629824 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:06.129613 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:06.129693 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:06.130038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:06.628772 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:06.628866 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:06.629198 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:07.128967 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:07.129056 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:07.129446 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:07.629172 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:07.629271 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:07.629622 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:08.129431 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:08.129524 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:08.129883 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:08.129948 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:08.629670 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:08.629754 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:08.630071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:09.128820 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:09.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:09.129287 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:09.629017 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:09.629101 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:09.629445 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:10.129193 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:10.129297 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:10.129649 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:10.629427 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:10.629501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:10.629814 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:10.629890 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:11.129612 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.129692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.129995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:11.628703 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.628780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.629047 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.128784 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.129223 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.628955 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.629416 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:13.129129 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.129596 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:13.129670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:13.629350 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.629433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.129533 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.129618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.129952 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.628687 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.628782 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.629096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.128811 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.128888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.628958 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.629372 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:15.629444 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:16.129169 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.129269 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:16.629474 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.629546 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.629863 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.129733 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.130077 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.628801 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.629169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:18.128883 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.128963 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:18.129398 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:18.629048 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.629135 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.629454 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.129179 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.129268 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.129621 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.629351 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.629424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.629708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:20.129508 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.129585 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.129925 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:20.129994 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:20.628667 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.628737 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.629038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.128739 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.628882 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.128994 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.129070 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.129426 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.629135 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.629221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.629538 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:22.629601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:23.129384 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.129466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.129808 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:23.629595 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.629669 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.629984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.128733 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.128814 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.629511 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.630004 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:24.630069 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:25.128773 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.128859 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:25.629077 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.629159 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.629492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.129299 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.129377 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.129704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.629492 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.629562 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:27.129668 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.129753 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:27.130203 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:27.628888 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.628961 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.129030 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:28.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:28.129492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.629210 1653676 node_ready.go:38] duration metric: took 6m0.000644351s for node "functional-699837" to be "Ready" ...
	I0804 09:01:28.630996 1653676 out.go:201] 
	W0804 09:01:28.631963 1653676 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 09:01:28.631975 1653676 out.go:270] * 
	W0804 09:01:28.633557 1653676 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:01:28.634655 1653676 out.go:201] 

** /stderr **
functional_test.go:678: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-699837 --alsologtostderr -v=8": exit status 80
functional_test.go:680: soft start took 6m8.216341392s for "functional-699837" cluster.
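The stderr above ends the way the retry loop foreshadows: minikube polls GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 roughly every 500ms, every attempt fails with "connection refused", and after 6m0s the node-ready wait gives up with WaitNodeCondition: context deadline exceeded. In other words, the apiserver inside the container never became reachable again. A minimal triage sketch, assuming the profile name and addresses shown in this report (run from the test host):

    minikube -p functional-699837 ip                        # expected to print 192.168.49.2
    curl -k --max-time 2 https://192.168.49.2:8441/livez    # refused here until the apiserver is back
    minikube -p functional-699837 ssh "sudo systemctl status kubelet --no-pager"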
I0804 09:01:28.944405 1582690 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
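The inspect output above shows the container itself is healthy: State.Status is "running", and 8441/tcp (the apiserver port) is published to 127.0.0.1:32786, so the connection refusals come from inside the guest rather than from Docker's port mapping. A sketch for reading that mapping programmatically, reusing the Go template minikube itself applies to 22/tcp later in these logs (the port value differs per run):

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-699837)
    curl -k --max-time 2 "https://127.0.0.1:${PORT}/version"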
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (266.395585ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
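Here --format={{.Host}} renders only the Host field of minikube's status struct, which is why the command prints just "Running" while still exiting nonzero (status encodes component health in the exit code's bits, hence the harness's "may be ok"). A sketch that surfaces the other fields in one line; the field names are assumed to match the current status struct and are worth verifying against plain `minikube status` output:

    minikube status -p functional-699837 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'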
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-114794 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh -- ls -la /mount-9p                                                                                                           │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh sudo umount -f /mount-9p                                                                                                      │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ mount          │ -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount3 --alsologtostderr -v=1                                  │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ mount          │ -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount1 --alsologtostderr -v=1                                  │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ mount          │ -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount2 --alsologtostderr -v=1                                  │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ ssh            │ functional-114794 ssh findmnt -T /mount1                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ ssh            │ functional-114794 ssh findmnt -T /mount1                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh findmnt -T /mount2                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh findmnt -T /mount3                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ mount          │ -p functional-114794 --kill=true                                                                                                                    │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start          │ -p functional-114794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ update-context │ functional-114794 update-context --alsologtostderr -v=2                                                                                             │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ update-context │ functional-114794 update-context --alsologtostderr -v=2                                                                                             │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ update-context │ functional-114794 update-context --alsologtostderr -v=2                                                                                             │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format short --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format yaml --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh pgrep buildkitd                                                                                                               │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ image          │ functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr                                              │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format json --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format table --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls                                                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ delete         │ -p functional-114794                                                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ start          │ -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start          │ -p functional-699837 --alsologtostderr -v=8                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:55 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:55:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 08:55:20.770600 1653676 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:55:20.770872 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.770883 1653676 out.go:358] Setting ErrFile to fd 2...
	I0804 08:55:20.770890 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.771067 1653676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:55:20.771644 1653676 out.go:352] Setting JSON to false
	I0804 08:55:20.772653 1653676 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149810,"bootTime":1754147911,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:55:20.772739 1653676 start.go:140] virtualization: kvm guest
	I0804 08:55:20.774597 1653676 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:55:20.775675 1653676 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:55:20.775678 1653676 notify.go:220] Checking for updates...
	I0804 08:55:20.776705 1653676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:55:20.777818 1653676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:20.778845 1653676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:55:20.779811 1653676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:55:20.780885 1653676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:55:20.782127 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:20.782240 1653676 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:55:20.804704 1653676 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:55:20.804841 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.850605 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.841828701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.850698 1653676 docker.go:318] overlay module found
	I0804 08:55:20.852305 1653676 out.go:177] * Using the docker driver based on existing profile
	I0804 08:55:20.853166 1653676 start.go:304] selected driver: docker
	I0804 08:55:20.853179 1653676 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.853275 1653676 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:55:20.853364 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.899900 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.891412564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.900590 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:20.900687 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:20.900743 1653676 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.902216 1653676 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 08:55:20.903155 1653676 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:55:20.904009 1653676 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:55:20.904940 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:20.904978 1653676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:55:20.904991 1653676 cache.go:56] Caching tarball of preloaded images
	I0804 08:55:20.905036 1653676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:55:20.905069 1653676 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 08:55:20.905079 1653676 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 08:55:20.905203 1653676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 08:55:20.923511 1653676 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 08:55:20.923529 1653676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 08:55:20.923544 1653676 cache.go:230] Successfully downloaded all kic artifacts
	I0804 08:55:20.923577 1653676 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 08:55:20.923631 1653676 start.go:364] duration metric: took 36.633µs to acquireMachinesLock for "functional-699837"
	I0804 08:55:20.923647 1653676 start.go:96] Skipping create...Using existing machine configuration
	I0804 08:55:20.923652 1653676 fix.go:54] fixHost starting: 
	I0804 08:55:20.923842 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:20.940410 1653676 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 08:55:20.940440 1653676 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 08:55:20.942107 1653676 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 08:55:20.943161 1653676 machine.go:93] provisionDockerMachine start ...
	I0804 08:55:20.943249 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:20.959620 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:20.959871 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:20.959884 1653676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 08:55:21.080396 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.080433 1653676 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 08:55:21.080500 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.097426 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.097649 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.097666 1653676 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 08:55:21.227825 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.227926 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.246066 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.246278 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.246294 1653676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 08:55:21.373154 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 08:55:21.373185 1653676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 08:55:21.373228 1653676 ubuntu.go:177] setting up certificates
	I0804 08:55:21.373273 1653676 provision.go:84] configureAuth start
	I0804 08:55:21.373335 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:21.390471 1653676 provision.go:143] copyHostCerts
	I0804 08:55:21.390507 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390548 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 08:55:21.390558 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390632 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 08:55:21.390734 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390760 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 08:55:21.390767 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390803 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 08:55:21.390876 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390902 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 08:55:21.390914 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390947 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 08:55:21.391030 1653676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
	I0804 08:55:21.573518 1653676 provision.go:177] copyRemoteCerts
	I0804 08:55:21.573582 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 08:55:21.573618 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.591269 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:21.681513 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 08:55:21.681585 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 08:55:21.702708 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 08:55:21.702758 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 08:55:21.723583 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 08:55:21.723630 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 08:55:21.744569 1653676 provision.go:87] duration metric: took 371.27679ms to configureAuth
	I0804 08:55:21.744602 1653676 ubuntu.go:193] setting minikube options for container-runtime
	I0804 08:55:21.744799 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:21.744861 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.762017 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.762244 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.762255 1653676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 08:55:21.889470 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 08:55:21.889494 1653676 ubuntu.go:71] root file system type: overlay
	I0804 08:55:21.889614 1653676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 08:55:21.889686 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.906485 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.906734 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.906827 1653676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 08:55:22.043972 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
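	The pair of ExecStart= directives above is the standard systemd override pattern the unit's own comments describe: for anything but Type=oneshot, systemd rejects a unit with two effective ExecStart= values, so the first, empty ExecStart= clears the command inherited from the base unit before the real one is set. A minimal standalone sketch of the same pattern as a drop-in (the drop-in path and dockerd flags here are illustrative, not minikube's):
	
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker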
	I0804 08:55:22.044042 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.061528 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:22.061801 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:22.061820 1653676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 08:55:22.189999 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 08:55:22.190024 1653676 machine.go:96] duration metric: took 1.246850112s to provisionDockerMachine
	I0804 08:55:22.190035 1653676 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 08:55:22.190046 1653676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 08:55:22.190105 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 08:55:22.190157 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.207121 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.297799 1653676 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 08:55:22.300559 1653676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0804 08:55:22.300580 1653676 command_runner.go:130] > NAME="Ubuntu"
	I0804 08:55:22.300588 1653676 command_runner.go:130] > VERSION_ID="22.04"
	I0804 08:55:22.300596 1653676 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0804 08:55:22.300602 1653676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0804 08:55:22.300608 1653676 command_runner.go:130] > ID=ubuntu
	I0804 08:55:22.300614 1653676 command_runner.go:130] > ID_LIKE=debian
	I0804 08:55:22.300622 1653676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0804 08:55:22.300634 1653676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0804 08:55:22.300652 1653676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0804 08:55:22.300662 1653676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0804 08:55:22.300667 1653676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0804 08:55:22.300719 1653676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 08:55:22.300753 1653676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 08:55:22.300768 1653676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 08:55:22.300780 1653676 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 08:55:22.300795 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 08:55:22.300857 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 08:55:22.300964 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 08:55:22.300977 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /etc/ssl/certs/15826902.pem
	I0804 08:55:22.301064 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 08:55:22.301073 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> /etc/test/nested/copy/1582690/hosts
	I0804 08:55:22.301115 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 08:55:22.308734 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:22.329778 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 08:55:22.350435 1653676 start.go:296] duration metric: took 160.385758ms for postStartSetup
	I0804 08:55:22.350534 1653676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 08:55:22.350588 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.367129 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.453443 1653676 command_runner.go:130] > 33%
	I0804 08:55:22.453718 1653676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 08:55:22.457863 1653676 command_runner.go:130] > 197G
	I0804 08:55:22.457888 1653676 fix.go:56] duration metric: took 1.534232726s for fixHost
	I0804 08:55:22.457898 1653676 start.go:83] releasing machines lock for "functional-699837", held for 1.534258328s
	I0804 08:55:22.457964 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:22.474710 1653676 ssh_runner.go:195] Run: cat /version.json
	I0804 08:55:22.474768 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.474834 1653676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 08:55:22.474905 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.492489 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.492983 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.576302 1653676 command_runner.go:130] > {"iso_version": "v1.36.0-1753487480-21147", "kicbase_version": "v0.0.47-1753871403-21198", "minikube_version": "v1.36.0", "commit": "69470231e9abd2d11a84a83b271e426458d5d12f"}
	I0804 08:55:22.576422 1653676 ssh_runner.go:195] Run: systemctl --version
	I0804 08:55:22.653754 1653676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 08:55:22.655827 1653676 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.16)
	I0804 08:55:22.655870 1653676 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0804 08:55:22.655949 1653676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 08:55:22.659872 1653676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0804 08:55:22.659895 1653676 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:22.659905 1653676 command_runner.go:130] > Device: 37h/55d	Inode: 822247      Links: 1
	I0804 08:55:22.659914 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:22.659929 1653676 command_runner.go:130] > Access: 2025-08-04 08:46:48.521872821 +0000
	I0804 08:55:22.659937 1653676 command_runner.go:130] > Modify: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659947 1653676 command_runner.go:130] > Change: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659959 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.660164 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 08:55:22.676431 1653676 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 08:55:22.676489 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 08:55:22.683904 1653676 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
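
The two find commands above first patch the loopback CNI config in place, then rename any bridge/podman configs out of the runtime's view (here none were found, so nothing was disabled). A rough local equivalent of the disable step, assuming only the .mk_disabled suffix matters (not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs mirrors the `find ... -name *bridge* -or -name
    // *podman* ... -exec mv {} {}.mk_disabled` step: matching files are
    // renamed so the container runtime stops loading them.
    func disableBridgeConfigs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled:", src)
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeConfigs("/etc/cni/net.d"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
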
	I0804 08:55:22.683925 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:22.683957 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:22.684079 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:22.696848 1653676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0804 08:55:22.698010 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.084233 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 08:55:23.094208 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 08:55:23.103030 1653676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 08:55:23.103076 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 08:55:23.111645 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.120216 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 08:55:23.128524 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.137020 1653676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 08:55:23.144932 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 08:55:23.153318 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 08:55:23.161730 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 08:55:23.170124 1653676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 08:55:23.176419 1653676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 08:55:23.177058 1653676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 08:55:23.184211 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:23.265466 1653676 ssh_runner.go:195] Run: sudo systemctl restart containerd
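
Each sed invocation above is a line-anchored regex rewrite of /etc/containerd/config.toml; forcing SystemdCgroup = false (to match the detected cgroupfs driver) corresponds to something like this illustrative Go fragment:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
        // Same shape as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
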
	I0804 08:55:23.467281 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:23.467337 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:23.467388 1653676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 08:55:23.477772 1653676 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0804 08:55:23.477865 1653676 command_runner.go:130] > [Unit]
	I0804 08:55:23.477892 1653676 command_runner.go:130] > Description=Docker Application Container Engine
	I0804 08:55:23.477904 1653676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0804 08:55:23.477912 1653676 command_runner.go:130] > BindsTo=containerd.service
	I0804 08:55:23.477924 1653676 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0804 08:55:23.477935 1653676 command_runner.go:130] > Wants=network-online.target
	I0804 08:55:23.477942 1653676 command_runner.go:130] > Requires=docker.socket
	I0804 08:55:23.477950 1653676 command_runner.go:130] > StartLimitBurst=3
	I0804 08:55:23.477958 1653676 command_runner.go:130] > StartLimitIntervalSec=60
	I0804 08:55:23.477963 1653676 command_runner.go:130] > [Service]
	I0804 08:55:23.477971 1653676 command_runner.go:130] > Type=notify
	I0804 08:55:23.477977 1653676 command_runner.go:130] > Restart=on-failure
	I0804 08:55:23.477992 1653676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0804 08:55:23.478010 1653676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0804 08:55:23.478023 1653676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0804 08:55:23.478048 1653676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0804 08:55:23.478062 1653676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0804 08:55:23.478073 1653676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0804 08:55:23.478088 1653676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0804 08:55:23.478104 1653676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0804 08:55:23.478125 1653676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0804 08:55:23.478140 1653676 command_runner.go:130] > ExecStart=
	I0804 08:55:23.478162 1653676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0804 08:55:23.478451 1653676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0804 08:55:23.478489 1653676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0804 08:55:23.478505 1653676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0804 08:55:23.478520 1653676 command_runner.go:130] > LimitNOFILE=infinity
	I0804 08:55:23.478529 1653676 command_runner.go:130] > LimitNPROC=infinity
	I0804 08:55:23.478536 1653676 command_runner.go:130] > LimitCORE=infinity
	I0804 08:55:23.478544 1653676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0804 08:55:23.478559 1653676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0804 08:55:23.478570 1653676 command_runner.go:130] > TasksMax=infinity
	I0804 08:55:23.478576 1653676 command_runner.go:130] > TimeoutStartSec=0
	I0804 08:55:23.478586 1653676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0804 08:55:23.478592 1653676 command_runner.go:130] > Delegate=yes
	I0804 08:55:23.478606 1653676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0804 08:55:23.478612 1653676 command_runner.go:130] > KillMode=process
	I0804 08:55:23.478659 1653676 command_runner.go:130] > [Install]
	I0804 08:55:23.478680 1653676 command_runner.go:130] > WantedBy=multi-user.target
	I0804 08:55:23.480586 1653676 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 08:55:23.480654 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 08:55:23.491375 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:23.505761 1653676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0804 08:55:23.506806 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.923432 1653676 ssh_runner.go:195] Run: which cri-dockerd
	I0804 08:55:23.926961 1653676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0804 08:55:23.927156 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 08:55:23.935149 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 08:55:23.950832 1653676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 08:55:24.042992 1653676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 08:55:24.297851 1653676 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 08:55:24.297998 1653676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
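
The log records only the size of the daemon.json written here (130 bytes), not its payload. A daemon.json that selects the cgroupfs driver would look roughly like the constant below; the content is hypothetical, inferred only from the "configuring docker to use cgroupfs" message:

    package main

    import "fmt"

    // daemonJSON is a hypothetical reconstruction; the actual 130-byte
    // payload is not shown in the log. "exec-opts" is the documented
    // dockerd knob for choosing a cgroup driver.
    const daemonJSON = `{
        "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }`

    func main() { fmt.Print(daemonJSON) }
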
	I0804 08:55:24.377001 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 08:55:24.388783 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:24.510366 1653676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 08:55:24.982429 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 08:55:24.992600 1653676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 08:55:25.006985 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.016432 1653676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 08:55:25.099651 1653676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 08:55:25.175485 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.251241 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 08:55:25.263161 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 08:55:25.272497 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.348098 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 08:55:25.408736 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.419584 1653676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 08:55:25.419655 1653676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 08:55:25.422672 1653676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0804 08:55:25.422693 1653676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 08:55:25.422702 1653676 command_runner.go:130] > Device: 45h/69d	Inode: 1258        Links: 1
	I0804 08:55:25.422711 1653676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0804 08:55:25.422722 1653676 command_runner.go:130] > Access: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422730 1653676 command_runner.go:130] > Modify: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422743 1653676 command_runner.go:130] > Change: 2025-08-04 08:55:25.357889711 +0000
	I0804 08:55:25.422749 1653676 command_runner.go:130] >  Birth: -
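
"Will wait 60s for socket path" boils down to polling stat until the path exists and is a socket. A minimal sketch of that wait loop (assumed shape, not the actual minikube code):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses -- the same contract as the 60s wait logged above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
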
	I0804 08:55:25.422776 1653676 start.go:563] Will wait 60s for crictl version
	I0804 08:55:25.422814 1653676 ssh_runner.go:195] Run: which crictl
	I0804 08:55:25.425611 1653676 command_runner.go:130] > /usr/bin/crictl
	I0804 08:55:25.425730 1653676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 08:55:25.455697 1653676 command_runner.go:130] > Version:  0.1.0
	I0804 08:55:25.455721 1653676 command_runner.go:130] > RuntimeName:  docker
	I0804 08:55:25.455727 1653676 command_runner.go:130] > RuntimeVersion:  28.3.3
	I0804 08:55:25.455733 1653676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 08:55:25.458002 1653676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 08:55:25.458069 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.480067 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.481564 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.502625 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.506722 1653676 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 08:55:25.506807 1653676 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 08:55:25.523376 1653676 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 08:55:25.526929 1653676 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0804 08:55:25.527043 1653676 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 08:55:25.527223 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:25.922076 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.309911 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.726305 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:26.726461 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.101061 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.477147 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.859614 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.878541 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.878563 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.878570 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.878580 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.878585 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.878590 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.878595 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.878599 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.878603 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.879821 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.879847 1653676 docker.go:633] Images already preloaded, skipping extraction
	I0804 08:55:27.879906 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.898058 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.898084 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.898091 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.898095 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.898099 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.898103 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.898109 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.898113 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.898117 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.898143 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.898167 1653676 cache_images.go:85] Images are preloaded, skipping loading
	I0804 08:55:27.898180 1653676 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 08:55:27.898290 1653676 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 08:55:27.898340 1653676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 08:55:27.944494 1653676 command_runner.go:130] > cgroupfs
	I0804 08:55:27.946023 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:27.946045 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:27.946061 1653676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 08:55:27.946082 1653676 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 08:55:27.946247 1653676 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 08:55:27.946320 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 08:55:27.953892 1653676 command_runner.go:130] > kubeadm
	I0804 08:55:27.953910 1653676 command_runner.go:130] > kubectl
	I0804 08:55:27.953915 1653676 command_runner.go:130] > kubelet
	I0804 08:55:27.954677 1653676 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 08:55:27.954730 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 08:55:27.962553 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 08:55:27.978365 1653676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 08:55:27.994068 1653676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0804 08:55:28.009976 1653676 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 08:55:28.013276 1653676 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0804 08:55:28.013353 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.101449 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.112250 1653676 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 08:55:28.112270 1653676 certs.go:194] generating shared ca certs ...
	I0804 08:55:28.112291 1653676 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.112464 1653676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 08:55:28.112506 1653676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 08:55:28.112516 1653676 certs.go:256] generating profile certs ...
	I0804 08:55:28.112631 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 08:55:28.112686 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 08:55:28.112722 1653676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 08:55:28.112733 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 08:55:28.112747 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 08:55:28.112759 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 08:55:28.112772 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 08:55:28.112783 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 08:55:28.112795 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 08:55:28.112808 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 08:55:28.112819 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 08:55:28.112866 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 08:55:28.112898 1653676 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 08:55:28.112907 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 08:55:28.112929 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 08:55:28.112954 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 08:55:28.112975 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 08:55:28.113011 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:28.113036 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.113051 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.113068 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem -> /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.113660 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 08:55:28.135009 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 08:55:28.155784 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 08:55:28.176520 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 08:55:28.197558 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 08:55:28.218349 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 08:55:28.239391 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 08:55:28.259973 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 08:55:28.280899 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 08:55:28.301872 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 08:55:28.322816 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 08:55:28.343561 1653676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 08:55:28.359122 1653676 ssh_runner.go:195] Run: openssl version
	I0804 08:55:28.363884 1653676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0804 08:55:28.364128 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 08:55:28.372266 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375320 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375365 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375402 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.381281 1653676 command_runner.go:130] > b5213941
	I0804 08:55:28.381530 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 08:55:28.388997 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 08:55:28.397048 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399946 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399991 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.400016 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.406052 1653676 command_runner.go:130] > 51391683
	I0804 08:55:28.406304 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 08:55:28.413987 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 08:55:28.422286 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425317 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425349 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425376 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.431562 1653676 command_runner.go:130] > 3ec20f2e
	I0804 08:55:28.431844 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
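
Each CA lands in /etc/ssl/certs twice: once as a PEM and once as a <subject-hash>.0 symlink, which is the layout OpenSSL uses to look up trust anchors. The `test -L || ln -fs` step above corresponds roughly to this sketch (hash values are the `openssl x509 -hash` outputs logged above):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // linkByHash creates the <hash>.0 symlink OpenSSL expects, mirroring
    // `test -L /etc/ssl/certs/b5213941.0 || ln -fs <pem> /etc/ssl/certs/b5213941.0`.
    func linkByHash(certDir, pem, hash string) error {
        link := filepath.Join(certDir, hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink (or file) already present
        }
        return os.Symlink(pem, link)
    }

    func main() {
        err := linkByHash("/etc/ssl/certs", "/etc/ssl/certs/minikubeCA.pem", "b5213941")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
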
	I0804 08:55:28.439543 1653676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442556 1653676 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442581 1653676 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:28.442590 1653676 command_runner.go:130] > Device: 801h/2049d	Inode: 822354      Links: 1
	I0804 08:55:28.442597 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:28.442603 1653676 command_runner.go:130] > Access: 2025-08-04 08:51:18.188665144 +0000
	I0804 08:55:28.442607 1653676 command_runner.go:130] > Modify: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442614 1653676 command_runner.go:130] > Change: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442619 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442691 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 08:55:28.448546 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.448806 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 08:55:28.454608 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.454889 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 08:55:28.460580 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.460805 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 08:55:28.466615 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.466839 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 08:55:28.472661 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.472705 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 08:55:28.478445 1653676 command_runner.go:130] > Certificate will not expire
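
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds; "Certificate will not expire" is the success case. The same check in Go, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires within d -- the Go analogue of `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if !soon {
            fmt.Println("Certificate will not expire")
        }
    }
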
	I0804 08:55:28.478508 1653676 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:28.478619 1653676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 08:55:28.496419 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 08:55:28.503804 1653676 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0804 08:55:28.503825 1653676 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0804 08:55:28.503833 1653676 command_runner.go:130] > /var/lib/minikube/etcd:
	I0804 08:55:28.504531 1653676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 08:55:28.504546 1653676 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 08:55:28.504584 1653676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 08:55:28.511980 1653676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 08:55:28.512384 1653676 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-699837" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.512513 1653676 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "functional-699837" cluster setting kubeconfig missing "functional-699837" context setting]
	I0804 08:55:28.512791 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.513199 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.513384 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.513811 1653676 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0804 08:55:28.513826 1653676 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0804 08:55:28.513833 1653676 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0804 08:55:28.513839 1653676 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0804 08:55:28.513844 1653676 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0804 08:55:28.513876 1653676 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0804 08:55:28.514257 1653676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 08:55:28.521605 1653676 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0804 08:55:28.521634 1653676 kubeadm.go:593] duration metric: took 17.082556ms to restartPrimaryControlPlane
	I0804 08:55:28.521645 1653676 kubeadm.go:394] duration metric: took 43.142663ms to StartCluster
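
The restart path hinges on the `diff -u` run at 08:55:28.514257: exit status 0 means the freshly rendered kubeadm.yaml.new matches the running config, so no kubeadm re-init is needed. Schematically (a sketch, not the real control flow):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfig compares the live kubeadm config against the newly
    // rendered one; `diff -u` exits 0 when they are identical. (A real
    // implementation would distinguish diff's exit status 2, "trouble",
    // from a genuine difference.)
    func needsReconfig(current, desired string) bool {
        err := exec.Command("sudo", "diff", "-u", current, desired).Run()
        return err != nil // non-zero exit => files differ => reconfigure
    }

    func main() {
        if !needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new") {
            fmt.Println("running cluster does not require reconfiguration")
        }
    }
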
	I0804 08:55:28.521666 1653676 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.521736 1653676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.522230 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.522435 1653676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 08:55:28.522512 1653676 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 08:55:28.522651 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:28.522656 1653676 addons.go:69] Setting storage-provisioner=true in profile "functional-699837"
	I0804 08:55:28.522728 1653676 addons.go:238] Setting addon storage-provisioner=true in "functional-699837"
	I0804 08:55:28.522681 1653676 addons.go:69] Setting default-storageclass=true in profile "functional-699837"
	I0804 08:55:28.522800 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.522810 1653676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-699837"
	I0804 08:55:28.523050 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.523236 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.524415 1653676 out.go:177] * Verifying Kubernetes components...
	I0804 08:55:28.525459 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.542729 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.542941 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.543225 1653676 addons.go:238] Setting addon default-storageclass=true in "functional-699837"
	I0804 08:55:28.543255 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.543552 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.543853 1653676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:28.545053 1653676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.545072 1653676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 08:55:28.545126 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.560950 1653676 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.560976 1653676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 08:55:28.561028 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.561396 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.582841 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.617980 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.628515 1653676 node_ready.go:35] waiting up to 6m0s for node "functional-699837" to be "Ready" ...
	I0804 08:55:28.628655 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:28.628715 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:28.628984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
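
The GET against /api/v1/nodes/functional-699837 is the start of a poll for the node's Ready condition; the empty Response status here is an early attempt while the apiserver is still coming up. A rough equivalent with client-go, assuming the kubeconfig path from the log is reachable (not minikube's own round-tripper setup):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and checks its Ready condition; errors
    // (e.g. connection refused during apiserver restart) count as not ready.
    func nodeReady(cs *kubernetes.Clientset, name string) bool {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21223-1578987/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for !nodeReady(cs, "functional-699837") {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("node Ready")
    }
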
	I0804 08:55:28.669259 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.681042 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.723292 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.723334 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.723359 1653676 retry.go:31] will retry after 184.647945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732373 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.732422 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732443 1653676 retry.go:31] will retry after 304.201438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
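
Note: both applies above fail for the same reason. kubectl validates each manifest against the cluster's OpenAPI schema, and the schema download from https://localhost:8441/openapi/v2 is refused because the apiserver inside the node is not listening yet, so every attempt (with or without --force) exits 1 and minikube queues a retry with a growing, jittered delay (184ms, 304ms, 476ms, ...). A hand-rolled sketch of that retry shape, loosely mirroring the retry.go lines in this log (it is not minikube's actual implementation):

    package retryutil

    import (
        "math/rand"
        "time"
    )

    // RetryWithBackoff retries apply until it succeeds or attempts run
    // out, sleeping for a roughly doubling, jittered delay between
    // tries. The delays loosely mirror the 184ms -> 304ms -> 476ms
    // progression logged above; this is a sketch only.
    func RetryWithBackoff(attempts int, base time.Duration, apply func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            sleep := base << uint(i)                                 // exponential growth
            sleep += time.Duration(rand.Int63n(int64(sleep)/2 + 1))  // jitter
            time.Sleep(sleep)
        }
        return err
    }
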
	I0804 08:55:28.908717 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.958881 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.958925 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.958945 1653676 retry.go:31] will retry after 476.117899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.037179 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.088413 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.088468 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.088491 1653676 retry.go:31] will retry after 197.264107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.129716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.130032 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:29.286304 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.334473 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.337029 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.337065 1653676 retry.go:31] will retry after 823.238005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.435237 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:29.482679 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.485403 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.485436 1653676 retry.go:31] will retry after 800.644745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.629726 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.629799 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.630104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.128837 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.128917 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.129285 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.161434 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.213167 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.213231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.213275 1653676 retry.go:31] will retry after 656.353253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.286342 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.334470 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.336981 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.337012 1653676 retry.go:31] will retry after 508.253019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.629489 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.629586 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.629950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:30.630017 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
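
Note: the warning above comes from the readiness loop started at 08:55:28: minikube polls GET /api/v1/nodes/functional-699837 roughly every 500ms for up to 6m0s, and while the apiserver is down each round trip ends in connection refused (logged as an empty Response status). A sketch of such a poll with client-go follows; the shape is assumed, it is not minikube's node_ready.go:

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls the node's Ready condition every 500ms for up
    // to 6 minutes, matching the cadence visible in the log above.
    func WaitNodeReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // Connection refused lands here; returning nil keeps
                    // polling, like the "will retry" warnings in the log.
                    fmt.Println("node not reachable yet:", err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
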
	I0804 08:55:30.845486 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.869953 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.897779 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.897836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.897862 1653676 retry.go:31] will retry after 1.094600532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922225 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.922291 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922314 1653676 retry.go:31] will retry after 805.303636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.129681 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.628691 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.628775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.728325 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:31.779677 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:31.779728 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.779748 1653676 retry.go:31] will retry after 2.236258385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.993064 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:32.044458 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:32.044511 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.044552 1653676 retry.go:31] will retry after 1.503507165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.129706 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.129775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:32.629732 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.629813 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.630171 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:32.630256 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:33.128768 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.129210 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:33.548844 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:33.599998 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:33.600058 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.600081 1653676 retry.go:31] will retry after 1.994543648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.629251 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.629339 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.629634 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.017206 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:34.068508 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:34.068573 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.068597 1653676 retry.go:31] will retry after 3.823609715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.128678 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.129067 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.629688 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.629764 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.630098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.129721 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.130115 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:35.130189 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:35.595749 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:35.629120 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.629209 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.629582 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.645323 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:35.647845 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:35.647880 1653676 retry.go:31] will retry after 3.559085278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:36.129701 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.129780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.130117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:36.628869 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.628953 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.629336 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.129085 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.129171 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.129515 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.629335 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.629411 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.629704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:37.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:37.893118 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:37.941760 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:37.944423 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:37.944452 1653676 retry.go:31] will retry after 4.996473933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:38.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.128878 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.129260 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:38.628699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.628786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.629112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.128699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.128786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.129139 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.207320 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:39.257569 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:39.257615 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.257640 1653676 retry.go:31] will retry after 8.124151658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.629122 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.629208 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:40.129218 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.129325 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.129628 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:40.129693 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:40.629297 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.629368 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.629673 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.129406 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.129495 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.129887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.629498 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.629579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.629928 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.129645 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.130002 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:42.130063 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:42.629629 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.629709 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.630062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.941490 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:42.990741 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:42.993232 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:42.993279 1653676 retry.go:31] will retry after 4.825851231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:43.129602 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.129690 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.130065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:43.628834 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.628909 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.629270 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.129025 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.129120 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.629737 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:44.629803 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:45.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.129961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:45.628704 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.628789 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.629130 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.128858 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.128936 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.129295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.629013 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.629096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.629444 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.129179 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.129266 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.129609 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:47.129674 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:47.381978 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:47.430195 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.433093 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.433123 1653676 retry.go:31] will retry after 10.012002454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.629500 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.629573 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.629910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.820313 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:47.870430 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.870476 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.870493 1653676 retry.go:31] will retry after 10.075489679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:48.128804 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.128895 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.129267 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET/response cycle repeated every ~500ms through 08:55:55.630, each attempt failing instantly; node_ready.go:55 logged the "connection refused" retry warning at 08:55:49, 08:55:51, and 08:55:54 ...]
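
The ~500ms cadence in the condensed span above is minikube waiting for the node's Ready condition, treating "connection refused" as retryable. A minimal client-go sketch of such a loop, with the node name, kubeconfig path, and interval read off the log (a sketch of the pattern against a recent client-go, not minikube's node_ready.go):

// nodeready.go - poll a node's Ready condition until it is True,
// retrying on transient errors such as "connection refused".
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-699837", metav1.GetOptions{})
		if err != nil {
			// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the cadence visible in the log
	}
}
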
	I0804 08:55:56.128707 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:56.128802 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:57.445382 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:57.946208 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:06.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:06.129644 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:56:06.129694 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:06.129736 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.130254 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:16.130338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
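
These two 10000ms responses are client-side timeouts, not server replies: the TCP connect now succeeds (the apiserver is coming back up), but the TLS handshake never completes, and the client gives up after a 10s handshake deadline. A minimal sketch of the net/http transport setting that produces this exact error string (the 10s value matches the log; attributing it to the client's transport default is an assumption):

// tlstimeout.go - the transport knob behind the
// "net/http: TLS handshake timeout" errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSHandshakeTimeout: 10 * time.Second, // the 10000ms seen above
			TLSClientConfig:     &tls.Config{InsecureSkipVerify: true},
		},
	}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-699837")
	// while the apiserver is mid-restart this prints
	// "net/http: TLS handshake timeout"
	fmt.Println(err)
}
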
	I0804 08:56:16.130408 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:16.130480 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.262782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=132
	I0804 08:56:17.263910 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:56:17.264149 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264472 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:17.264610 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.264716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264973 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
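
At 08:56:16.262 the apiserver finally answers, but tells the client to back off, and with_retry.go honors the advertised Retry-After of 1s before re-issuing the GET. A minimal sketch of that header handling (the fallback delay mirrors the 1s in the log; this is not client-go's implementation):

// retryafter.go - honor a Retry-After response header before retrying,
// as the with_retry.go line above reports doing.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(url string, maxRetries int) (*http.Response, error) {
	for attempt := 0; ; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		delay := resp.Header.Get("Retry-After")
		if delay == "" || attempt >= maxRetries {
			return resp, nil
		}
		secs, convErr := strconv.Atoi(delay)
		if convErr != nil {
			secs = 1 // fall back to the 1s delay seen in the log
		}
		resp.Body.Close()
		fmt.Printf("got Retry-After, sleeping %ds (attempt %d)\n", secs, attempt+1)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getWithRetryAfter("https://192.168.49.2:8441/healthz", 2)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
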
	I0804 08:56:17.267370 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267420 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (19.822003727s)
	W0804 08:56:17.267450 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267470 1653676 retry.go:31] will retry after 18.146841122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267784 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267815 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (19.321577292s)
	W0804 08:56:17.267836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267852 1653676 retry.go:31] will retry after 19.077492147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
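
Both addon applies hit the same refused OpenAPI download, and minikube schedules their retries with slightly different delays (18.1s vs 19.1s here, 13.9s and 20.8s later): backoff with jitter, so the parallel applies do not retry in lockstep. A minimal sketch of that retry pattern (the base delay and jitter range are illustrative assumptions, not minikube's retry.go tuning):

// applyretry.go - retry a flaky operation with jittered delays, the
// pattern behind the "will retry after 18.146841122s" lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs op up to attempts times, sleeping a randomized
// delay around base between failures.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// jitter in [0.75*base, 1.25*base) so concurrent retries spread out
		d := time.Duration(float64(base) * (0.75 + rand.Float64()*0.5))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	err := retryWithJitter(4, 2*time.Second, func() error {
		i++
		if i < 3 {
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}
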
	[... node-readiness polling resumed and repeated unchanged every ~500ms from 08:56:17.629 to 08:56:35.130, every GET to https://192.168.49.2:8441/api/v1/nodes/functional-699837 ending in "connection refused"; node_ready.go:55 repeated its retry warning at 08:56:18, 08:56:21, 08:56:23, 08:56:26, 08:56:28, 08:56:31, and 08:56:33 ...]
	I0804 08:56:35.414447 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:35.463330 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:35.466231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.466267 1653676 retry.go:31] will retry after 13.873476046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.629483 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:35.629558 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:35.629897 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:35.629960 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:36.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:36.129713 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:36.130046 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:36.346375 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:36.394439 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:36.396962 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.396996 1653676 retry.go:31] will retry after 20.764306788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... the identical polling cycle continued every ~500ms from 08:56:36.629 to 08:56:49.129, still "connection refused"; node_ready.go:55 retry warnings at 08:56:38, 08:56:40, 08:56:42, 08:56:45, and 08:56:47 ...]
	I0804 08:56:49.340493 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:49.391267 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:49.391322 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.391344 1653676 retry.go:31] will retry after 22.530122873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued every ~500ms from 08:56:49.629 to 08:56:57.129 with the same refused connections; node_ready.go:55 retry warnings at 08:56:49, 08:56:52, 08:56:54, and 08:56:57 ...]
	I0804 08:56:57.161690 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:57.212094 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212172 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212321 1653676 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
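
	[editor's note] The storageclass apply above fails because kubectl's client-side validation fetches the OpenAPI schema from the apiserver, which is refusing connections; the suggested --validate=false would only skip validation, not make the apply succeed against a down apiserver. The log says "apply failed, will retry", so the addon path is a retry loop around the same command. A minimal sketch of such a retry wrapper, assuming the attempt count and backoff values (the command line itself is copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f <manifest>` with
// exponential backoff until it exits 0 or attempts are exhausted.
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 2*time.Second)
	fmt.Println(err)
}
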
	I0804 08:56:57.629703 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:57.629786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:57.630137 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:58.128910 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:58.128986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:58.129348 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:58.629128 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:58.629212 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:58.629557 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:59.129348 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:59.129423 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:59.129768 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:59.129831 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:59.629552 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:59.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:59.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:00.128668 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:00.128748 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:00.129104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:00.628883 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:00.628972 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:00.629344 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:01.128990 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:01.129091 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:01.129447 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:01.629187 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:01.629284 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:01.629625 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:01.629697 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:02.129438 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:02.129511 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:02.129847 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:02.629620 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:02.629714 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:02.630041 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:03.128760 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:03.128862 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:03.129196 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:03.628968 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:03.629065 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:03.629415 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:04.129145 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:04.129220 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:04.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:04.129643 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:04.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:04.629445 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:04.629746 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:05.129583 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:05.129661 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:05.129993 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:05.628708 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:05.628794 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:05.629079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:06.128832 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:06.128925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:06.129318 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:06.629043 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:06.629138 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:06.629480 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:06.629558 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:07.129326 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:07.129425 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:07.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:07.629601 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:07.629694 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:07.630065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:08.128801 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:08.128909 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:08.129315 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:08.629044 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:08.629145 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:08.629528 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:08.629593 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:09.129358 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:09.129453 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:09.129910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:09.629675 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:09.629754 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:09.630073 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:10.128808 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:10.128885 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:10.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:10.628993 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:10.629089 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:10.629434 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:11.129231 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:11.129347 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:11.129707 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:11.129770 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:11.629527 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:11.629607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:11.629894 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:11.922305 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:57:11.970691 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973096 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973263 1653676 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 08:57:11.975142 1653676 out.go:177] * Enabled addons: 
	I0804 08:57:11.976503 1653676 addons.go:514] duration metric: took 1m43.454009966s for enable addons: enabled=[]
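
	[editor's note] The addon phase closes after 1m43s with enabled=[], since both the storageclass and storage-provisioner applies failed, and the run returns to the node-readiness poll below. For reference, the check the poll is driving toward is the node's "Ready" condition; this sketch shows that check against a decoded Node body using hand-rolled types rather than the real client-go/corev1 ones, so the struct shapes here are assumptions matching the public Node JSON layout:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-ins for the Node status fields the readiness check needs.
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type node struct {
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the decoded Node carries a "Ready"
// condition with status "True".
func nodeReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ok, err := nodeReady(sample)
	fmt.Println(ok, err)
}
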
	I0804 08:57:12.129480 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:12.129579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:12.129915 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:12.629535 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:12.629640 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:12.629960 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:13.129603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:13.129676 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:13.130018 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:13.130084 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:13.629651 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:13.629730 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:13.630028 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:14.129674 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:14.129818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:14.130187 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:14.628738 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:14.628810 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:14.629106 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:15.128681 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:15.128756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:15.129116 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:15.628700 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:15.628781 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:15.629089 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:15.629155 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:16.128845 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:16.128921 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:16.129302 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:16.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:16.628918 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:16.629233 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:17.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:17.128893 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:17.129257 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:17.628792 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:17.629202 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:17.629293 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:18.128759 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:18.128847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:18.129200 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:18.629041 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:18.629121 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:18.629468 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:19.129039 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:19.129112 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:19.129489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:19.629035 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:19.629105 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:19.629466 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:19.629532 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:20.129056 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:20.129136 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:20.129527 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:20.629075 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:20.629154 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:20.629482 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:21.129294 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:21.129367 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:21.129717 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:21.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:21.629463 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:21.629764 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:21.629831 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:22.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:22.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:22.129781 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:22.629426 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:22.629501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:22.629789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:23.129450 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:23.129535 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:23.129870 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:23.629332 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:23.629418 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:23.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:24.128868 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:24.128960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:24.129333 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:24.129416 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:24.628863 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:24.628939 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:24.629295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:25.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:25.128887 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:25.129269 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:25.629006 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:25.629081 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:25.629396 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:26.129192 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:26.129303 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:26.129672 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:26.129741 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:26.629536 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:26.629611 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:26.629914 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:27.129705 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:27.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:27.130156 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:27.628879 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:27.628961 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:27.629280 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:28.129023 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:28.129114 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:28.129510 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:28.629296 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:28.629387 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:28.629697 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:28.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:29.129519 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:29.129613 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:29.129968 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:29.628696 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:29.628770 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:29.629059 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:30.128786 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:30.128880 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:30.129235 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:30.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:30.629054 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:30.629304 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:31.129276 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:31.129363 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:31.129719 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:31.129793 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:31.629528 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:31.629615 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:31.629920 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:32.128690 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:32.128765 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:32.129098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:32.628838 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:32.628956 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:32.629288 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:33.129003 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:33.129091 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:33.129461 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:33.629193 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:33.629295 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:33.629610 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:33.629682 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:34.129449 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:34.129539 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:34.129898 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:34.629687 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:34.629766 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:34.630068 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:35.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:35.128868 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:35.129222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:35.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:35.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:35.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:36.129189 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:36.129297 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:36.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:36.129763 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:36.629508 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:36.629584 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:36.629873 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:37.129696 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:37.129776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:37.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:37.628857 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:37.628938 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:37.629221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:38.128990 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:38.129078 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:38.129487 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:38.629184 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:38.629289 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:38.629594 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:38.629667 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:39.129364 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:39.129441 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:39.129810 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:39.629603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:39.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:39.629968 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:40.128718 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:40.128797 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:40.129178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:40.628945 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:40.629021 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:40.629364 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:41.129136 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:41.129253 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:41.129612 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:41.129682 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:41.629452 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:41.629530 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:41.629831 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:42.129618 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:42.129707 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:42.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:42.628760 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:42.628838 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:42.629155 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:43.128868 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:43.128970 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:43.129365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:43.629090 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:43.629163 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:43.629503 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:43.629565 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:44.129335 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:44.129433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:44.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:44.629577 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:44.629649 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:44.629949 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:45.128664 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:45.128759 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:45.129131 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:45.628854 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:45.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:45.629229 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:46.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:46.129047 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:46.129442 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:46.129517 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	... (identical readiness polls repeated every ~500 ms from 08:57:46 through 08:58:47: GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 with the same Accept and User-Agent headers, each returning an empty response; node_ready.go:55 logged the same "will retry" warning, Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused, roughly every 2.5 s, last at 08:58:46) ...
	I0804 08:58:47.629563 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:47.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:47.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:48.128670 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:48.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:48.129104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:48.629702 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:48.629776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:48.630085 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:48.630146 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:49.128823 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:49.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:49.129229 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:49.628981 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:49.629065 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:49.629392 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:50.129122 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:50.129198 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:50.129554 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:50.629352 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:50.629447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:50.629788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:51.129551 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:51.129636 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:51.129966 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:51.130030 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:51.628723 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:51.628822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:51.629134 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:52.128861 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:52.128966 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:52.129334 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:52.629047 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:52.629124 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:52.629436 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:53.129166 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:53.129271 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:53.129578 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:53.629347 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:53.629425 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:53.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:53.629789 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:54.129531 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:54.129608 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:54.130022 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:54.628732 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:54.628807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:54.629107 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:55.128818 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:55.128901 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:55.129281 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:55.629003 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:55.629084 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:55.629411 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:56.129310 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:56.129399 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:56.129752 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:56.129817 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:56.629559 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:56.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:56.629927 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:57.129729 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:57.129818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:57.130192 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:57.628939 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:57.629019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:57.629349 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:58.129065 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:58.129186 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:58.129616 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:58.629318 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:58.629398 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:58.629699 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:58.629757 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:59.129513 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:59.129603 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:59.129965 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:59.628703 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:59.628781 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:59.629083 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:00.128805 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:00.128896 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:00.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:00.629019 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:00.629098 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:00.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:01.129270 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:01.129348 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:01.129717 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:01.129794 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:01.629537 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:01.629608 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:01.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:02.128689 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:02.128769 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:02.129142 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:02.628902 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:02.628987 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:02.629315 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:03.129038 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:03.129117 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:03.129496 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:03.629371 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:03.629457 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:03.629773 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:03.629837 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:04.129591 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:04.129684 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:14.133399 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10003
	W0804 08:59:14.133474 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:59:14.133535 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:14.133571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.134577 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:59:24.134670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
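Unlike the connection-refused failures before and after it, each of the two requests above stalls for ~10s and then fails with "net/http: TLS handshake timeout": the TCP connect now succeeds (something is listening again) but the TLS handshake never completes, which is typical of an apiserver that is still starting up. The 10-second figure matches the TLSHandshakeTimeout of Go's default HTTP transport; a hedged sketch showing where that knob lives (the endpoint is the one from the log, the rest is illustrative):

package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: (&net.Dialer{Timeout: 30 * time.Second}).DialContext,
			// If the socket connects but the server never finishes the TLS
			// handshake, the request errors out after exactly this long --
			// the ~10,000ms gap between the request and response lines above.
			TLSHandshakeTimeout: 10 * time.Second,
		},
	}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-699837")
	fmt.Println(err) // e.g. Get "...": net/http: TLS handshake timeout
}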
	I0804 08:59:24.134743 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:24.134791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.447100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=312
	I0804 08:59:25.448003 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:59:25.448109 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448371 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
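At 08:59:24.447 the apiserver finally returns a real response, but one carrying a Retry-After header; the with_retry.go line shows client-go sleeping the server-supplied delay ("delay=1s attempt=1") before re-issuing the GET at 08:59:25.448. A simplified standalone version of that behavior, assuming integer-seconds Retry-After values only (real clients also accept the HTTP-date form):

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter re-issues a GET whenever the response carries a
// Retry-After header, sleeping the advertised delay between attempts.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		delay := resp.Header.Get("Retry-After")
		if delay == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(delay)
		if err != nil {
			secs = 1 // fall back to a short delay on an unparseable header
		}
		fmt.Printf("got a Retry-After response, delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getWithRetryAfter(http.DefaultClient, "https://example.com/", 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}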
	I0804 08:59:25.448473 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.448503 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:25.629198 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.629320 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.629693 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:26.129362 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:26.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:26.129786 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:26.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:26.629634 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:26.629913 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:26.629981 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:27.129710 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:27.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:27.130145 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:27.628843 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:27.628915 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:27.629211 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:28.128958 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:28.129049 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:28.129414 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:28.629057 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:28.629131 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:28.629437 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:29.129142 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:29.129215 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:29.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:29.129634 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:29.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:29.629434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:29.629732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:30.129550 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:30.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:30.129981 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:30.628711 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:30.628785 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:30.629088 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:31.128761 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:31.128837 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:31.129194 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:31.628935 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:31.629013 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:31.629357 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:31.629423 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:32.129102 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:32.129207 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:32.129598 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:32.629343 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:32.629412 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:32.629682 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:33.129483 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:33.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:33.129937 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:33.628685 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:33.628761 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:33.629071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:34.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:34.128880 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:34.129196 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:34.129292 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:34.628955 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:34.629026 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:34.629332 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:35.129092 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:35.129172 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:35.129540 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:35.629393 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:35.629466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:35.629788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:36.129551 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:36.129629 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:36.129981 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:36.130049 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:36.628714 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:36.628796 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:36.629109 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:37.128919 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:37.128993 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:37.129345 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:37.629059 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:37.629147 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:37.629463 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:38.129234 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:38.129326 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:38.129664 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:38.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:38.629432 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:38.629732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:38.629805 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:39.129576 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:39.129650 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:39.129997 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:39.628740 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:39.628825 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:39.629123 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.128863 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.128946 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.629061 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.629132 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:41.129329 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.129415 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.129770 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:41.129836 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:41.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.629926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.129712 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.129803 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.130147 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.629230 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.129055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.129407 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.629110 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.629193 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.629549 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:43.629613 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:44.129360 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.129442 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.129809 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:44.629604 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.629695 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.629982 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.128765 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.628969 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.629365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:46.129219 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.129334 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.129701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:46.129778 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:46.629522 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.629594 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.629887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.129668 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.129774 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.130135 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.628848 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.628924 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.629222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.128974 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.129074 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.129460 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.629189 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.629275 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.629575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:48.629637 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:49.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.129460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.129826 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:49.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.128684 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.128784 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.129153 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.628866 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.628940 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.629236 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:51.128964 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.129053 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.129443 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:51.129520 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:51.629181 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.629285 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.129363 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.129782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.629637 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.629921 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.128676 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.128760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.129117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:53.629319 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:54.129011 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.129119 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.129458 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:54.629169 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.629255 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.629563 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.129370 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.129456 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.129803 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.629586 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.629656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:55.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll of /api/v1/nodes/functional-699837 repeats at ~500 ms intervals from 08:59:56 through 09:00:57, every request returning an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same "connection refused" warning roughly every 2.3 s across the whole interval ...]
	I0804 09:00:57.629369 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:57.629438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:57.629742 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:58.129515 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:58.129595 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:58.129950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:58.130034 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:58.628750 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:58.628830 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:58.629147 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:59.128851 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:59.128928 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:59.129309 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:59.629042 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:59.629121 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:59.629455 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:00.129167 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:00.129270 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:00.129632 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:00.629423 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:00.629498 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:00.629793 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:00.629863 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:01.129591 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:01.129676 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:01.130023 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:01.628726 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:01.628804 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:01.629104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:02.128841 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:02.128936 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:02.129299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:02.629029 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:02.629126 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:02.629455 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:03.129199 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:03.129305 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:03.129646 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:03.129706 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:03.629451 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:03.629523 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:03.629841 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:04.129677 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:04.129766 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:04.130114 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:04.628842 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:04.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:04.629305 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:05.129074 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:05.129179 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:05.129561 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:05.629356 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:05.629434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:05.629760 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:05.629824 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:06.129613 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:06.129693 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:06.130038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:06.628772 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:06.628866 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:06.629198 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:07.128967 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:07.129056 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:07.129446 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:07.629172 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:07.629271 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:07.629622 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:08.129431 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:08.129524 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:08.129883 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:08.129948 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:08.629670 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:08.629754 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:08.630071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:09.128820 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:09.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:09.129287 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:09.629017 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:09.629101 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:09.629445 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:10.129193 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:10.129297 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:10.129649 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:10.629427 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:10.629501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:10.629814 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:10.629890 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:11.129612 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.129692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.129995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:11.628703 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.628780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.629047 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.128784 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.129223 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.628955 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.629416 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:13.129129 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.129596 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:13.129670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:13.629350 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.629433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.129533 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.129618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.129952 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.628687 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.628782 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.629096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.128811 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.128888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.628958 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.629372 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:15.629444 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:16.129169 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.129269 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:16.629474 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.629546 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.629863 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.129733 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.130077 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.628801 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.629169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:18.128883 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.128963 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:18.129398 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:18.629048 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.629135 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.629454 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.129179 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.129268 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.129621 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.629351 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.629424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.629708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:20.129508 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.129585 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.129925 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:20.129994 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:20.628667 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.628737 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.629038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.128739 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.628882 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.128994 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.129070 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.129426 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.629135 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.629221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.629538 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:22.629601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:23.129384 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.129466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.129808 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:23.629595 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.629669 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.629984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.128733 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.128814 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.629511 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.630004 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:24.630069 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:25.128773 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.128859 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:25.629077 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.629159 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.629492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.129299 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.129377 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.129704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.629492 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.629562 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:27.129668 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.129753 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:27.130203 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:27.628888 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.628961 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.129030 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:28.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:28.129492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.629210 1653676 node_ready.go:38] duration metric: took 6m0.000644351s for node "functional-699837" to be "Ready" ...
	I0804 09:01:28.630996 1653676 out.go:201] 
	W0804 09:01:28.631963 1653676 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 09:01:28.631975 1653676 out.go:270] * 
	W0804 09:01:28.633557 1653676 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:01:28.634655 1653676 out.go:201] 
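
The six-minute wait that just expired is minikube polling the node's Ready condition once every 500ms, as the request/response cycle above shows. A minimal shell sketch of an equivalent poll (illustrative only; assumes kubectl is pointed at this cluster and that the node name matches the profile name used in this run):

    # Poll the Ready condition every 500ms and give up after 6 minutes,
    # mirroring the wait loop in the log above.
    deadline=$(( $(date +%s) + 360 ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
      status=$(kubectl get node functional-699837 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
      [ "$status" = "True" ] && echo "node is Ready" && exit 0
      sleep 0.5
    done
    echo "timed out waiting for node Ready" >&2; exit 1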
	
	
	==> Docker <==
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 systemd[1]: cri-docker.service: Deactivated successfully.
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start docker client with request timeout 0s"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Loaded network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 08:55:25 functional-699837 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a670d9d90ef4b3f9c8a2229b07375783d2742e14cb8b08de1d1d609352b31ca9/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6196286ba923f262b934ea01e1a6c54ba05e38908d2ce0251696c08a8b6e4e4f/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87c98d51b11aa2b27ab051d1a1e76c991403967dc4bbed5c8865a1c8839a006c/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dc39892c792c69f93a9689deb4a22058aa932aaab9b5a2ef60fe93066740a6a/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:56:16 functional-699837 dockerd[7186]: time="2025-08-04T08:56:16.274092329Z" level=info msg="ignoring event" container=6a82f093dfdcc77dca8bafe4751718938b424c4cd13715b8c25f8c91d4094c87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:25 functional-699837 dockerd[7186]: time="2025-08-04T08:56:25.952124711Z" level=info msg="ignoring event" container=d11d953e110f7fac9239023c8f301d3ea182fcc19934837d8f119e7d945ae14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:55 functional-699837 dockerd[7186]: time="2025-08-04T08:56:55.721506604Z" level=info msg="ignoring event" container=340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:24 functional-699837 dockerd[7186]: time="2025-08-04T08:59:24.457189004Z" level=info msg="ignoring event" container=a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:32 functional-699837 dockerd[7186]: time="2025-08-04T08:59:32.204638673Z" level=info msg="ignoring event" container=2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fafac7520c8d       9ad783615e1bc       2 minutes ago       Exited              kube-controller-manager   6                   87c98d51b11aa       kube-controller-manager-functional-699837
	a70a68ec61693       d85eea91cc41d       2 minutes ago       Exited              kube-apiserver            6                   6196286ba923f       kube-apiserver-functional-699837
	340fbe431c80a       1e30c0b1e9b99       4 minutes ago       Exited              etcd                      6                   a670d9d90ef4b       etcd-functional-699837
	3206d43d6e58f       21d34a2aeacf5       5 minutes ago       Running             kube-scheduler            2                   4dc39892c792c       kube-scheduler-functional-699837
	0cb03d71b984f       21d34a2aeacf5       6 minutes ago       Exited              kube-scheduler            1                   cdae8372eae9d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:01:29.900723    9326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:29.901201    9326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:29.902720    9326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:29.903135    9326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:29.904762    9326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
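
kubectl cannot reach localhost:8441 because the kube-apiserver container has exited (see the container status table above), so nothing is listening on that port. A hedged way to confirm this from the host, assuming the Docker driver's node container is named functional-699837 as elsewhere in this log:

    # Expect no listener on 8441 and an Exited kube-apiserver container.
    docker exec functional-699837 ss -ltn 'sport = :8441'
    docker exec functional-699837 docker ps -a --filter name=kube-apiserver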
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [340fbe431c80] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
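
etcd prints its usage text and exits because it was started with -proxy-refresh-interval, a flag this etcd build does not define; that crash loop is what leaves 127.0.0.1:2379 refusing connections in the kube-apiserver log below. A hedged check for where the flag comes from, assuming the standard kubeadm static pod manifest path:

    # The stale flag should appear in the etcd static pod manifest.
    docker exec functional-699837 \
      grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml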
	
	
	
	==> kernel <==
	 09:01:29 up 1 day, 17:42,  0 users,  load average: 0.01, 0.05, 0.34
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [a70a68ec6169] <==
	W0804 08:59:04.426148       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.426280       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 08:59:04.427463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 08:59:04.434192       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 08:59:04.440592       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 08:59:04.440613       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 08:59:04.440846       1 instance.go:232] Using reconciler: lease
	W0804 08:59:04.441668       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.441684       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	[... identical Channel #1/#2/#7 transport warnings ("connection refused" dialing 127.0.0.1:2379) repeat with increasing backoff through 08:59:20.945736 ...]
	F0804 08:59:24.442401       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
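
The apiserver retried etcd at 127.0.0.1:2379 for about 20 seconds and then exited fatally, which is expected while etcd itself is crash-looping (see the etcd section above). A hedged health probe from inside the node, assuming minikube's usual cert layout under /var/lib/minikube/certs and that curl is available in the node image:

    # While etcd is down this fails with a connection-refused error, matching the log.
    docker exec functional-699837 curl -s \
      --cacert /var/lib/minikube/certs/etcd/ca.crt \
      --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
      --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
      https://127.0.0.1:2379/health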
	
	
	==> kube-controller-manager [2fafac7520c8] <==
	I0804 08:59:11.887703       1 serving.go:386] Generated self-signed cert in-memory
	I0804 08:59:12.166874       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 08:59:12.166898       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 08:59:12.168293       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 08:59:12.168315       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 08:59:12.168600       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 08:59:12.168727       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 08:59:32.171192       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
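
The controller-manager gives the apiserver a fixed window to report healthy and exits when /healthz never answers, which feeds the restart cascade visible in the container status table above. The same endpoint can be probed by hand (hedged sketch; -k skips TLS verification, which is fine for a pure reachability check):

    curl -k https://192.168.49.2:8441/healthz
    # expect a connection-refused error while the apiserver is down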
	
	
	==> kube-scheduler [0cb03d71b984] <==
	
	
	==> kube-scheduler [3206d43d6e58] <==
	E0804 09:00:11.384701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:00:12.260216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:00:13.558952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:00:13.721571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:00:16.379946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:00:23.348524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:00:28.563885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:00:32.014424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:00:33.033677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:00:47.281529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:00:47.653383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:00:48.988484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:00:54.836226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:00:54.975251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:00:57.394600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:00:59.500812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:01:00.013055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:01:00.539902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:01:01.692270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:01:02.088398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:01:08.204402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:01:09.352314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:01:11.128294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:01:23.683836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:01:24.236788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	
	
	==> kubelet <==
	Aug 04 09:01:16 functional-699837 kubelet[4226]: I0804 09:01:16.478571    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:16 functional-699837 kubelet[4226]: E0804 09:01:16.479008    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:17 functional-699837 kubelet[4226]: E0804 09:01:17.465282    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:17 functional-699837 kubelet[4226]: E0804 09:01:17.598918    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:17 functional-699837 kubelet[4226]: I0804 09:01:17.598996    4226 scope.go:117] "RemoveContainer" containerID="2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd"
	Aug 04 09:01:17 functional-699837 kubelet[4226]: E0804 09:01:17.599130    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:01:18 functional-699837 kubelet[4226]: E0804 09:01:18.142560    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.185884435643239b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-699837 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,LastTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:18 functional-699837 kubelet[4226]: E0804 09:01:18.142667    4226 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{functional-699837.185884435643239b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-699837 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,LastTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:18 functional-699837 kubelet[4226]: E0804 09:01:18.142986    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:19 functional-699837 kubelet[4226]: E0804 09:01:19.656720    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: E0804 09:01:21.599078    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: I0804 09:01:21.599164    4226 scope.go:117] "RemoveContainer" containerID="340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: E0804 09:01:21.599350    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: E0804 09:01:21.602204    4226 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:01:22 functional-699837 kubelet[4226]: E0804 09:01:22.598787    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:22 functional-699837 kubelet[4226]: I0804 09:01:22.598874    4226 scope.go:117] "RemoveContainer" containerID="a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359"
	Aug 04 09:01:22 functional-699837 kubelet[4226]: E0804 09:01:22.599029    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(2b39e4280fdde7528fa65c33493b517b)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="2b39e4280fdde7528fa65c33493b517b"
	Aug 04 09:01:23 functional-699837 kubelet[4226]: I0804 09:01:23.480767    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:23 functional-699837 kubelet[4226]: E0804 09:01:23.481137    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.396607    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.466107    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.706024    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.936556    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:01:28 functional-699837 kubelet[4226]: E0804 09:01:28.598604    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:29 functional-699837 kubelet[4226]: E0804 09:01:29.657833    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (279.996159ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/SoftStart (369.97s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods (1.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-699837 get po -A
functional_test.go:713: (dbg) Non-zero exit: kubectl --context functional-699837 get po -A: exit status 1 (44.415906ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:715: failed to get kubectl pods: args "kubectl --context functional-699837 get po -A" : exit status 1
functional_test.go:719: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-699837 get po -A"
functional_test.go:722: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-699837 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:
-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (262.725497ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-114794 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh -- ls -la /mount-9p                                                                                                           │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh sudo umount -f /mount-9p                                                                                                      │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ mount          │ -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount3 --alsologtostderr -v=1                                  │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ mount          │ -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount1 --alsologtostderr -v=1                                  │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ mount          │ -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount2 --alsologtostderr -v=1                                  │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ ssh            │ functional-114794 ssh findmnt -T /mount1                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ ssh            │ functional-114794 ssh findmnt -T /mount1                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh findmnt -T /mount2                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh findmnt -T /mount3                                                                                                            │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ mount          │ -p functional-114794 --kill=true                                                                                                                    │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start          │ -p functional-114794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ update-context │ functional-114794 update-context --alsologtostderr -v=2                                                                                             │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ update-context │ functional-114794 update-context --alsologtostderr -v=2                                                                                             │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ update-context │ functional-114794 update-context --alsologtostderr -v=2                                                                                             │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format short --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format yaml --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh            │ functional-114794 ssh pgrep buildkitd                                                                                                               │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ image          │ functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr                                              │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format json --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls --format table --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image          │ functional-114794 image ls                                                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ delete         │ -p functional-114794                                                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ start          │ -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start          │ -p functional-699837 --alsologtostderr -v=8                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:55 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:55:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 08:55:20.770600 1653676 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:55:20.770872 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.770883 1653676 out.go:358] Setting ErrFile to fd 2...
	I0804 08:55:20.770890 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.771067 1653676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:55:20.771644 1653676 out.go:352] Setting JSON to false
	I0804 08:55:20.772653 1653676 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149810,"bootTime":1754147911,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:55:20.772739 1653676 start.go:140] virtualization: kvm guest
	I0804 08:55:20.774597 1653676 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:55:20.775675 1653676 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:55:20.775678 1653676 notify.go:220] Checking for updates...
	I0804 08:55:20.776705 1653676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:55:20.777818 1653676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:20.778845 1653676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:55:20.779811 1653676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:55:20.780885 1653676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:55:20.782127 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:20.782240 1653676 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:55:20.804704 1653676 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:55:20.804841 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.850605 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.841828701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.850698 1653676 docker.go:318] overlay module found
	I0804 08:55:20.852305 1653676 out.go:177] * Using the docker driver based on existing profile
	I0804 08:55:20.853166 1653676 start.go:304] selected driver: docker
	I0804 08:55:20.853179 1653676 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.853275 1653676 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:55:20.853364 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.899900 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.891412564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.900590 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:20.900687 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:20.900743 1653676 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.902216 1653676 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 08:55:20.903155 1653676 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:55:20.904009 1653676 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:55:20.904940 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:20.904978 1653676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:55:20.904991 1653676 cache.go:56] Caching tarball of preloaded images
	I0804 08:55:20.905036 1653676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:55:20.905069 1653676 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 08:55:20.905079 1653676 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 08:55:20.905203 1653676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 08:55:20.923511 1653676 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 08:55:20.923529 1653676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 08:55:20.923544 1653676 cache.go:230] Successfully downloaded all kic artifacts
	I0804 08:55:20.923577 1653676 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 08:55:20.923631 1653676 start.go:364] duration metric: took 36.633µs to acquireMachinesLock for "functional-699837"
	I0804 08:55:20.923647 1653676 start.go:96] Skipping create...Using existing machine configuration
	I0804 08:55:20.923652 1653676 fix.go:54] fixHost starting: 
	I0804 08:55:20.923842 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:20.940410 1653676 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 08:55:20.940440 1653676 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 08:55:20.942107 1653676 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 08:55:20.943161 1653676 machine.go:93] provisionDockerMachine start ...
	I0804 08:55:20.943249 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:20.959620 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:20.959871 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:20.959884 1653676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 08:55:21.080396 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.080433 1653676 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 08:55:21.080500 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.097426 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.097649 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.097666 1653676 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 08:55:21.227825 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.227926 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.246066 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.246278 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.246294 1653676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 08:55:21.373154 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
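The SSH command above is an idempotent /etc/hosts edit: if no line already maps the hostname, it rewrites an existing 127.0.1.1 entry or appends a new one. A sketch of the same guarded edit in native Go, assuming direct file access rather than SSH (helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the shell logic from the log: do nothing if the
// hostname is already mapped, otherwise rewrite the "127.0.1.1 ..." line or
// append one.
func ensureHostsEntry(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(hostsPath, []byte(out), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-699837"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```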
	I0804 08:55:21.373185 1653676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 08:55:21.373228 1653676 ubuntu.go:177] setting up certificates
	I0804 08:55:21.373273 1653676 provision.go:84] configureAuth start
	I0804 08:55:21.373335 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:21.390471 1653676 provision.go:143] copyHostCerts
	I0804 08:55:21.390507 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390548 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 08:55:21.390558 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390632 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 08:55:21.390734 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390760 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 08:55:21.390767 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390803 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 08:55:21.390876 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390902 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 08:55:21.390914 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390947 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 08:55:21.391030 1653676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
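The provision step above signs a Docker server certificate with the profile CA, with the SAN list printed in the log (127.0.0.1, 192.168.49.2, functional-699837, localhost, minikube). A self-contained Go sketch of issuing such a SAN-bearing server cert with crypto/x509 — this generates a throwaway ECDSA CA for brevity, whereas the real run reuses ca.pem/ca-key.pem and the key type may differ:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the log reuses the profile's ca.pem/ca-key.pem instead).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs listed in the log line.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-699837"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-699837", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```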
	I0804 08:55:21.573518 1653676 provision.go:177] copyRemoteCerts
	I0804 08:55:21.573582 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 08:55:21.573618 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.591269 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:21.681513 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 08:55:21.681585 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 08:55:21.702708 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 08:55:21.702758 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 08:55:21.723583 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 08:55:21.723630 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 08:55:21.744569 1653676 provision.go:87] duration metric: took 371.27679ms to configureAuth
	I0804 08:55:21.744602 1653676 ubuntu.go:193] setting minikube options for container-runtime
	I0804 08:55:21.744799 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:21.744861 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.762017 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.762244 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.762255 1653676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 08:55:21.889470 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 08:55:21.889494 1653676 ubuntu.go:71] root file system type: overlay
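The probe above shells out `df --output=fstype / | tail -n 1` to learn the root filesystem type ("overlay" inside the kic container) before rendering the Docker unit. The same probe as a small Go sketch (function name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the probe from the log: `df --output=fstype /` and keeps
// the last line (the first line is df's column header).
func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	return lines[len(lines)-1], nil
}

func main() {
	fs, err := rootFSType()
	if err != nil {
		panic(err)
	}
	fmt.Println("root file system type:", fs) // "overlay" in this run
}
```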
	I0804 08:55:21.889614 1653676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 08:55:21.889686 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.906485 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.906734 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.906827 1653676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 08:55:22.043972 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 08:55:22.044042 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.061528 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:22.061801 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:22.061820 1653676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 08:55:22.189999 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
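The one-liner above installs docker.service.new only when it differs from the installed unit (`diff -u ... || { mv ...; daemon-reload; enable; restart; }`), so an unchanged unit costs nothing. A Go sketch of that replace-if-changed pattern under the same assumptions (run as root; function name hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors the shell one-liner in the log: only when the
// rendered unit differs from the installed one do we move it into place and
// bounce the service.
func installIfChanged(current, rendered string) error {
	oldB, _ := os.ReadFile(current) // a missing file simply compares as different
	newB, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(oldB, newB) {
		return os.Remove(rendered) // identical: keep the running service untouched
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```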
	I0804 08:55:22.190024 1653676 machine.go:96] duration metric: took 1.246850112s to provisionDockerMachine
	I0804 08:55:22.190035 1653676 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 08:55:22.190046 1653676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 08:55:22.190105 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 08:55:22.190157 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.207121 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.297799 1653676 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 08:55:22.300559 1653676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0804 08:55:22.300580 1653676 command_runner.go:130] > NAME="Ubuntu"
	I0804 08:55:22.300588 1653676 command_runner.go:130] > VERSION_ID="22.04"
	I0804 08:55:22.300596 1653676 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0804 08:55:22.300602 1653676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0804 08:55:22.300608 1653676 command_runner.go:130] > ID=ubuntu
	I0804 08:55:22.300614 1653676 command_runner.go:130] > ID_LIKE=debian
	I0804 08:55:22.300622 1653676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0804 08:55:22.300634 1653676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0804 08:55:22.300652 1653676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0804 08:55:22.300662 1653676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0804 08:55:22.300667 1653676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0804 08:55:22.300719 1653676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 08:55:22.300753 1653676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 08:55:22.300768 1653676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 08:55:22.300780 1653676 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 08:55:22.300795 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 08:55:22.300857 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 08:55:22.300964 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 08:55:22.300977 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /etc/ssl/certs/15826902.pem
	I0804 08:55:22.301064 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 08:55:22.301073 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> /etc/test/nested/copy/1582690/hosts
	I0804 08:55:22.301115 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 08:55:22.308734 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:22.329778 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 08:55:22.350435 1653676 start.go:296] duration metric: took 160.385758ms for postStartSetup
	I0804 08:55:22.350534 1653676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 08:55:22.350588 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.367129 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.453443 1653676 command_runner.go:130] > 33%
	I0804 08:55:22.453718 1653676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 08:55:22.457863 1653676 command_runner.go:130] > 197G
	I0804 08:55:22.457888 1653676 fix.go:56] duration metric: took 1.534232726s for fixHost
	I0804 08:55:22.457898 1653676 start.go:83] releasing machines lock for "functional-699837", held for 1.534258328s
	I0804 08:55:22.457964 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:22.474710 1653676 ssh_runner.go:195] Run: cat /version.json
	I0804 08:55:22.474768 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.474834 1653676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 08:55:22.474905 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.492489 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.492983 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.576302 1653676 command_runner.go:130] > {"iso_version": "v1.36.0-1753487480-21147", "kicbase_version": "v0.0.47-1753871403-21198", "minikube_version": "v1.36.0", "commit": "69470231e9abd2d11a84a83b271e426458d5d12f"}
	I0804 08:55:22.576422 1653676 ssh_runner.go:195] Run: systemctl --version
	I0804 08:55:22.653754 1653676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 08:55:22.655827 1653676 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.16)
	I0804 08:55:22.655870 1653676 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0804 08:55:22.655949 1653676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 08:55:22.659872 1653676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0804 08:55:22.659895 1653676 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:22.659905 1653676 command_runner.go:130] > Device: 37h/55d	Inode: 822247      Links: 1
	I0804 08:55:22.659914 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:22.659929 1653676 command_runner.go:130] > Access: 2025-08-04 08:46:48.521872821 +0000
	I0804 08:55:22.659937 1653676 command_runner.go:130] > Modify: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659947 1653676 command_runner.go:130] > Change: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659959 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.660164 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 08:55:22.676431 1653676 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 08:55:22.676489 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 08:55:22.683904 1653676 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
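The find/sed pipeline above patches the loopback CNI config in place: it injects a `"name": "loopback"` field if missing and pins `cniVersion` to 1.0.0, then disables any bridge/podman configs. A sketch of the same patch done through encoding/json instead of sed — an illustrative alternative, not how minikube actually applies it:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// patchLoopbackConf mirrors the find/sed step from the log: ensure the
// loopback CNI config has a "name" and pin cniVersion to 1.0.0.
func patchLoopbackConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := map[string]any{}
	if err := json.Unmarshal(data, &conf); err != nil {
		return err
	}
	if conf["type"] != "loopback" {
		return nil // only the loopback config is patched
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	matches, _ := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	for _, m := range matches {
		if err := patchLoopbackConf(m); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```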
	I0804 08:55:22.683925 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:22.683957 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:22.684079 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:22.696848 1653676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0804 08:55:22.698010 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
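The binary.go line above skips caching and points at the dl.k8s.io URL whose `checksum=file:` parameter pairs the binary with its published .sha256 file. A Go sketch of that download-and-verify pairing (function name is hypothetical; the URLs are the ones from the log):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads a binary and checks it against the published
// .sha256 file — the same pairing the `checksum=file:` URL encodes.
func fetchVerified(binURL, sumURL string) ([]byte, error) {
	get := func(u string) ([]byte, error) {
		resp, err := http.Get(u)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", u, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}
	sum, err := get(sumURL)
	if err != nil {
		return nil, err
	}
	want := strings.Fields(string(sum))[0] // hex digest is the first field
	bin, err := get(binURL)
	if err != nil {
		return nil, err
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		return nil, fmt.Errorf("checksum mismatch: got %x want %s", got, want)
	}
	return bin, nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm"
	if _, err := fetchVerified(base, base+".sha256"); err != nil {
		panic(err)
	}
	fmt.Println("kubeadm verified")
}
```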
	I0804 08:55:23.084233 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 08:55:23.094208 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 08:55:23.103030 1653676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 08:55:23.103076 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 08:55:23.111645 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.120216 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 08:55:23.128524 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.137020 1653676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 08:55:23.144932 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 08:55:23.153318 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 08:55:23.161730 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 08:55:23.170124 1653676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 08:55:23.176419 1653676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 08:55:23.177058 1653676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 08:55:23.184211 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:23.265466 1653676 ssh_runner.go:195] Run: sudo systemctl restart containerd
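The run of sed commands above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the legacy runtime names are mapped to io.containerd.runc.v2, and unprivileged ports are enabled. A sketch of the central edit (the SystemdCgroup toggle) done with a Go regexp rather than sed — an illustration, not minikube's code path:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the runc option the same way the sed command in
// the log does, keeping containerd on the cgroupfs driver.
func setSystemdCgroup(configPath string, enabled bool) error {
	data, err := os.ReadFile(configPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(configPath, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// A `systemctl restart containerd` would follow, as in the log.
}
```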
	I0804 08:55:23.467281 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:23.467337 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:23.467388 1653676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 08:55:23.477772 1653676 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0804 08:55:23.477865 1653676 command_runner.go:130] > [Unit]
	I0804 08:55:23.477892 1653676 command_runner.go:130] > Description=Docker Application Container Engine
	I0804 08:55:23.477904 1653676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0804 08:55:23.477912 1653676 command_runner.go:130] > BindsTo=containerd.service
	I0804 08:55:23.477924 1653676 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0804 08:55:23.477935 1653676 command_runner.go:130] > Wants=network-online.target
	I0804 08:55:23.477942 1653676 command_runner.go:130] > Requires=docker.socket
	I0804 08:55:23.477950 1653676 command_runner.go:130] > StartLimitBurst=3
	I0804 08:55:23.477958 1653676 command_runner.go:130] > StartLimitIntervalSec=60
	I0804 08:55:23.477963 1653676 command_runner.go:130] > [Service]
	I0804 08:55:23.477971 1653676 command_runner.go:130] > Type=notify
	I0804 08:55:23.477977 1653676 command_runner.go:130] > Restart=on-failure
	I0804 08:55:23.477992 1653676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0804 08:55:23.478010 1653676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0804 08:55:23.478023 1653676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0804 08:55:23.478048 1653676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0804 08:55:23.478062 1653676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0804 08:55:23.478073 1653676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0804 08:55:23.478088 1653676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0804 08:55:23.478104 1653676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0804 08:55:23.478125 1653676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0804 08:55:23.478140 1653676 command_runner.go:130] > ExecStart=
	I0804 08:55:23.478162 1653676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0804 08:55:23.478451 1653676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0804 08:55:23.478489 1653676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0804 08:55:23.478505 1653676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0804 08:55:23.478520 1653676 command_runner.go:130] > LimitNOFILE=infinity
	I0804 08:55:23.478529 1653676 command_runner.go:130] > LimitNPROC=infinity
	I0804 08:55:23.478536 1653676 command_runner.go:130] > LimitCORE=infinity
	I0804 08:55:23.478544 1653676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0804 08:55:23.478559 1653676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0804 08:55:23.478570 1653676 command_runner.go:130] > TasksMax=infinity
	I0804 08:55:23.478576 1653676 command_runner.go:130] > TimeoutStartSec=0
	I0804 08:55:23.478586 1653676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0804 08:55:23.478592 1653676 command_runner.go:130] > Delegate=yes
	I0804 08:55:23.478606 1653676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0804 08:55:23.478612 1653676 command_runner.go:130] > KillMode=process
	I0804 08:55:23.478659 1653676 command_runner.go:130] > [Install]
	I0804 08:55:23.478680 1653676 command_runner.go:130] > WantedBy=multi-user.target
	I0804 08:55:23.480586 1653676 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 08:55:23.480654 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 08:55:23.491375 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:23.505761 1653676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0804 08:55:23.506806 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.923432 1653676 ssh_runner.go:195] Run: which cri-dockerd
	I0804 08:55:23.926961 1653676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0804 08:55:23.927156 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 08:55:23.935149 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 08:55:23.950832 1653676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 08:55:24.042992 1653676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 08:55:24.297851 1653676 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 08:55:24.297998 1653676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 08:55:24.377001 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 08:55:24.388783 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:24.510366 1653676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 08:55:24.982429 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 08:55:24.992600 1653676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 08:55:25.006985 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.016432 1653676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 08:55:25.099651 1653676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 08:55:25.175485 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.251241 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 08:55:25.263161 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 08:55:25.272497 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.348098 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 08:55:25.408736 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.419584 1653676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 08:55:25.419655 1653676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 08:55:25.422672 1653676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0804 08:55:25.422693 1653676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 08:55:25.422702 1653676 command_runner.go:130] > Device: 45h/69d	Inode: 1258        Links: 1
	I0804 08:55:25.422711 1653676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0804 08:55:25.422722 1653676 command_runner.go:130] > Access: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422730 1653676 command_runner.go:130] > Modify: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422743 1653676 command_runner.go:130] > Change: 2025-08-04 08:55:25.357889711 +0000
	I0804 08:55:25.422749 1653676 command_runner.go:130] >  Birth: -
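The "Will wait 60s for socket path" step resolves immediately here because the stat above already shows /var/run/cri-dockerd.sock as a live socket. A sketch of that wait loop in Go, assuming a simple poll-until-deadline design (function name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists and is a unix socket, or the
// deadline passes — the "Will wait 60s for socket path" step from the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is up")
}
```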
	I0804 08:55:25.422776 1653676 start.go:563] Will wait 60s for crictl version
	I0804 08:55:25.422814 1653676 ssh_runner.go:195] Run: which crictl
	I0804 08:55:25.425611 1653676 command_runner.go:130] > /usr/bin/crictl
	I0804 08:55:25.425730 1653676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 08:55:25.455697 1653676 command_runner.go:130] > Version:  0.1.0
	I0804 08:55:25.455721 1653676 command_runner.go:130] > RuntimeName:  docker
	I0804 08:55:25.455727 1653676 command_runner.go:130] > RuntimeVersion:  28.3.3
	I0804 08:55:25.455733 1653676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 08:55:25.458002 1653676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 08:55:25.458069 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.480067 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.481564 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.502625 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.506722 1653676 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 08:55:25.506807 1653676 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 08:55:25.523376 1653676 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 08:55:25.526929 1653676 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0804 08:55:25.527043 1653676 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 08:55:25.527223 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:25.922076 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.309911 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.726305 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:26.726461 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.101061 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.477147 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.859614 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.878541 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.878563 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.878570 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.878580 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.878585 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.878590 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.878595 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.878599 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.878603 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.879821 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.879847 1653676 docker.go:633] Images already preloaded, skipping extraction
	I0804 08:55:27.879906 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.898058 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.898084 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.898091 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.898095 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.898099 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.898103 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.898109 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.898113 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.898117 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.898143 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.898167 1653676 cache_images.go:85] Images are preloaded, skipping loading
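The two identical `docker images` listings above confirm that every image required for v1.34.0-beta.0 is already present, so cache_images.go skips loading. A sketch of that presence check as a set comparison over the same `--format` output (function name and required list are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reruns the probe from the log (`docker images --format
// {{.Repository}}:{{.Tag}}`) and checks every required image is present.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.34.0-beta.0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"registry.k8s.io/pause:3.10",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("images preloaded:", ok)
}
```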
	I0804 08:55:27.898180 1653676 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 08:55:27.898290 1653676 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 08:55:27.898340 1653676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 08:55:27.944494 1653676 command_runner.go:130] > cgroupfs
	I0804 08:55:27.946023 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:27.946045 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:27.946061 1653676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 08:55:27.946082 1653676 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 08:55:27.946247 1653676 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 08:55:27.946320 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 08:55:27.953892 1653676 command_runner.go:130] > kubeadm
	I0804 08:55:27.953910 1653676 command_runner.go:130] > kubectl
	I0804 08:55:27.953915 1653676 command_runner.go:130] > kubelet
	I0804 08:55:27.954677 1653676 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 08:55:27.954730 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 08:55:27.962553 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 08:55:27.978365 1653676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 08:55:27.994068 1653676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0804 08:55:28.009976 1653676 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 08:55:28.013276 1653676 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0804 08:55:28.013353 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.101449 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.112250 1653676 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 08:55:28.112270 1653676 certs.go:194] generating shared ca certs ...
	I0804 08:55:28.112291 1653676 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.112464 1653676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 08:55:28.112506 1653676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 08:55:28.112516 1653676 certs.go:256] generating profile certs ...
	I0804 08:55:28.112631 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 08:55:28.112686 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 08:55:28.112722 1653676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 08:55:28.112733 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 08:55:28.112747 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 08:55:28.112759 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 08:55:28.112772 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 08:55:28.112783 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 08:55:28.112795 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 08:55:28.112808 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 08:55:28.112819 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 08:55:28.112866 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 08:55:28.112898 1653676 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 08:55:28.112907 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 08:55:28.112929 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 08:55:28.112954 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 08:55:28.112975 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 08:55:28.113011 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:28.113036 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.113051 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.113068 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem -> /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.113660 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 08:55:28.135009 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 08:55:28.155784 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 08:55:28.176520 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 08:55:28.197558 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 08:55:28.218349 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 08:55:28.239391 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 08:55:28.259973 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 08:55:28.280899 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 08:55:28.301872 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 08:55:28.322816 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 08:55:28.343561 1653676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 08:55:28.359122 1653676 ssh_runner.go:195] Run: openssl version
	I0804 08:55:28.363884 1653676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0804 08:55:28.364128 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 08:55:28.372266 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375320 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375365 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375402 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.381281 1653676 command_runner.go:130] > b5213941
	I0804 08:55:28.381530 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 08:55:28.388997 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 08:55:28.397048 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399946 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399991 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.400016 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.406052 1653676 command_runner.go:130] > 51391683
	I0804 08:55:28.406304 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 08:55:28.413987 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 08:55:28.422286 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425317 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425349 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425376 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.431562 1653676 command_runner.go:130] > 3ec20f2e
	I0804 08:55:28.431844 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 08:55:28.439543 1653676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442556 1653676 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442581 1653676 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:28.442590 1653676 command_runner.go:130] > Device: 801h/2049d	Inode: 822354      Links: 1
	I0804 08:55:28.442597 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:28.442603 1653676 command_runner.go:130] > Access: 2025-08-04 08:51:18.188665144 +0000
	I0804 08:55:28.442607 1653676 command_runner.go:130] > Modify: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442614 1653676 command_runner.go:130] > Change: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442619 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442691 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 08:55:28.448546 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.448806 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 08:55:28.454608 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.454889 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 08:55:28.460580 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.460805 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 08:55:28.466615 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.466839 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 08:55:28.472661 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.472705 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 08:55:28.478445 1653676 command_runner.go:130] > Certificate will not expire
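Each -checkend 86400 probe asks whether a certificate stays valid for the next 86400 seconds (24 hours); openssl prints "Certificate will not expire" and exits 0 when it does, which is what the restart path needs before reusing the existing certs. A standalone version of the same check:

    # exit status 0 (and "Certificate will not expire") means the cert is good for 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver certificate valid for at least the next 24h"
    fi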
	I0804 08:55:28.478508 1653676 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:28.478619 1653676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 08:55:28.496419 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 08:55:28.503804 1653676 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0804 08:55:28.503825 1653676 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0804 08:55:28.503833 1653676 command_runner.go:130] > /var/lib/minikube/etcd:
	I0804 08:55:28.504531 1653676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
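The ls probe above is the restart-vs-init decision point: when kubeadm-flags.env, config.yaml, and the etcd data directory are all present, minikube restarts the existing control plane instead of running kubeadm init from scratch. The same test, reduced to a sketch:

    sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd \
      && echo "existing cluster state found; attempting restart"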
	I0804 08:55:28.504546 1653676 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 08:55:28.504584 1653676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 08:55:28.511980 1653676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 08:55:28.512384 1653676 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-699837" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.512513 1653676 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "functional-699837" cluster setting kubeconfig missing "functional-699837" context setting]
	I0804 08:55:28.512791 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
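The repair rewrites the kubeconfig under a file lock so the missing cluster and context entries reappear. As a hedged, by-hand approximation (not the code path minikube actually runs), the equivalent kubectl invocations would be:

    # recreate the cluster and context entries the log reports as missing
    kubectl config set-cluster functional-699837 \
      --server=https://192.168.49.2:8441 \
      --certificate-authority=/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt
    kubectl config set-context functional-699837 \
      --cluster=functional-699837 --user=functional-699837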
	I0804 08:55:28.513199 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.513384 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.513811 1653676 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0804 08:55:28.513826 1653676 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0804 08:55:28.513833 1653676 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0804 08:55:28.513839 1653676 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0804 08:55:28.513844 1653676 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0804 08:55:28.513876 1653676 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0804 08:55:28.514257 1653676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 08:55:28.521605 1653676 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
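The "does not require reconfiguration" verdict rides on diff's exit status: if the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new are identical, diff exits 0 and the control-plane manifests are left alone. A sketch of the same gate (a simplification of the actual logic):

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster config matches; skipping reconfiguration"
    fi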
	I0804 08:55:28.521634 1653676 kubeadm.go:593] duration metric: took 17.082556ms to restartPrimaryControlPlane
	I0804 08:55:28.521645 1653676 kubeadm.go:394] duration metric: took 43.142663ms to StartCluster
	I0804 08:55:28.521666 1653676 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.521736 1653676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.522230 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.522435 1653676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 08:55:28.522512 1653676 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
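Of the long toEnable map, only default-storageclass and storage-provisioner are true; everything else stays disabled on this profile. The effective set can be inspected for a profile with:

    minikube -p functional-699837 addons list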
	I0804 08:55:28.522651 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:28.522656 1653676 addons.go:69] Setting storage-provisioner=true in profile "functional-699837"
	I0804 08:55:28.522728 1653676 addons.go:238] Setting addon storage-provisioner=true in "functional-699837"
	I0804 08:55:28.522681 1653676 addons.go:69] Setting default-storageclass=true in profile "functional-699837"
	I0804 08:55:28.522800 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.522810 1653676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-699837"
	I0804 08:55:28.523050 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.523236 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.524415 1653676 out.go:177] * Verifying Kubernetes components...
	I0804 08:55:28.525459 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.542729 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.542941 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.543225 1653676 addons.go:238] Setting addon default-storageclass=true in "functional-699837"
	I0804 08:55:28.543255 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.543552 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.543853 1653676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:28.545053 1653676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.545072 1653676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 08:55:28.545126 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.560950 1653676 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.560976 1653676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 08:55:28.561028 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.561396 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.582841 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
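Both ssh clients target 127.0.0.1:32783, the host port Docker published for the container's 22/tcp; the inspect format string in the preceding Run lines is exactly how that port is discovered. By hand (the port is per-run and will differ):

    # which host port forwards to the container's SSH daemon?
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-699837
    # connect with the profile's key, as the test harness does
    ssh -i /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa \
      -p 32783 docker@127.0.0.1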
	I0804 08:55:28.617980 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.628515 1653676 node_ready.go:35] waiting up to 6m0s for node "functional-699837" to be "Ready" ...
	I0804 08:55:28.628655 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:28.628715 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:28.628984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
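This GET against /api/v1/nodes/functional-699837 is the poll behind "waiting up to 6m0s"; the empty status="" responses that follow mean the TCP connection itself is failing, not that the node reported NotReady. Once the apiserver answers, the same condition can be read directly:

    kubectl --context functional-699837 get node functional-699837 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'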
	I0804 08:55:28.669259 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.681042 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.723292 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.723334 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.723359 1653676 retry.go:31] will retry after 184.647945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732373 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.732422 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732443 1653676 retry.go:31] will retry after 304.201438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
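Both addon applies keep failing for one reason: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, which is not yet listening (connection refused on localhost:8441), so minikube re-queues each apply with a growing delay. The retry shape, reduced to a sketch with illustrative delays (the real backoff comes from retry.go):

    for delay in 0.2 0.3 0.5 0.8 1.1; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml && break
      sleep "$delay"
    done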
	I0804 08:55:28.908717 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.958881 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.958925 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.958945 1653676 retry.go:31] will retry after 476.117899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.037179 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.088413 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.088468 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.088491 1653676 retry.go:31] will retry after 197.264107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.129716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.130032 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:29.286304 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.334473 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.337029 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.337065 1653676 retry.go:31] will retry after 823.238005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.435237 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:29.482679 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.485403 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.485436 1653676 retry.go:31] will retry after 800.644745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.629726 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.629799 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.630104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.128837 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.128917 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.129285 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.161434 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.213167 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.213231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.213275 1653676 retry.go:31] will retry after 656.353253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.286342 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.334470 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.336981 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.337012 1653676 retry.go:31] will retry after 508.253019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.629489 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.629586 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.629950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:30.630017 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
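The warning names the underlying fault directly: nothing is accepting connections on 192.168.49.2:8441, so every node poll and addon apply above dies at the TCP layer before reaching Kubernetes. A quick manual probe of the same endpoint (-k because the apiserver presents a cluster-internal CA; expect "ok" once it is up):

    curl -k https://192.168.49.2:8441/healthz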
	I0804 08:55:30.845486 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.869953 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.897779 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.897836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.897862 1653676 retry.go:31] will retry after 1.094600532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922225 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.922291 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922314 1653676 retry.go:31] will retry after 805.303636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.129681 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.628691 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.628775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.728325 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:31.779677 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:31.779728 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.779748 1653676 retry.go:31] will retry after 2.236258385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.993064 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:32.044458 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:32.044511 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.044552 1653676 retry.go:31] will retry after 1.503507165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.129706 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.129775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:32.629732 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.629813 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.630171 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:32.630256 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:33.128768 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.129210 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:33.548844 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:33.599998 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:33.600058 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.600081 1653676 retry.go:31] will retry after 1.994543648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.629251 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.629339 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.629634 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.017206 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:34.068508 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:34.068573 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.068597 1653676 retry.go:31] will retry after 3.823609715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.128678 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.129067 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.629688 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.629764 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.630098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.129721 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.130115 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:35.130189 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:35.595749 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:35.629120 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.629209 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.629582 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.645323 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:35.647845 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:35.647880 1653676 retry.go:31] will retry after 3.559085278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:36.129701 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.129780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.130117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:36.628869 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.628953 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.629336 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.129085 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.129171 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.129515 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.629335 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.629411 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.629704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:37.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:37.893118 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:37.941760 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:37.944423 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:37.944452 1653676 retry.go:31] will retry after 4.996473933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:38.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.128878 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.129260 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:38.628699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.628786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.629112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.128699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.128786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.129139 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.207320 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:39.257569 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:39.257615 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.257640 1653676 retry.go:31] will retry after 8.124151658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
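
	Note the command shape in the ssh_runner lines: minikube runs kubectl inside the node, as root, with KUBECONFIG pinned to the guest path /var/lib/minikube/kubeconfig and the version-matched binary taken from /var/lib/minikube/binaries. Reproducing the same invocation with os/exec — illustration only, the real runner executes it over SSH inside the container:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Same argv as the ssh_runner line above; sudo accepts the leading
	        // VAR=value assignment and exports it to the command it runs.
	        cmd := exec.Command("sudo",
	            "KUBECONFIG=/var/lib/minikube/kubeconfig",
	            "/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
	            "apply", "--force", "-f",
	            "/etc/kubernetes/addons/storage-provisioner.yaml")
	        out, err := cmd.CombinedOutput()
	        fmt.Println(string(out), err)
	    }
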
	I0804 08:55:39.629122 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.629208 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:40.129218 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.129325 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.129628 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:40.129693 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:40.629297 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.629368 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.629673 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.129406 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.129495 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.129887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.629498 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.629579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.629928 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.129645 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.130002 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:42.130063 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:42.629629 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.629709 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.630062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.941490 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:42.990741 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:42.993232 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:42.993279 1653676 retry.go:31] will retry after 4.825851231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:43.129602 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.129690 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.130065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:43.628834 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.628909 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.629270 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.129025 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.129120 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.629737 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:44.629803 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:45.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.129961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:45.628704 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.628789 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.629130 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.128858 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.128936 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.129295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.629013 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.629096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.629444 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.129179 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.129266 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.129609 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:47.129674 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
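
	Between the addon retries, node_ready.go keeps probing GET /api/v1/nodes/functional-699837 on a steady 500 ms cadence, surfacing a "will retry" warning every few failed polls. Assuming a standard client-go clientset — this is an illustrative sketch, not minikube's code — the readiness loop boils down to:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls the named node every 500ms (the cadence visible in
	    // the log) until its Ready condition is True or the context expires.
	    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	        for {
	            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        return nil
	                    }
	                }
	            } else {
	                // Mirrors the node_ready.go "will retry" warnings above.
	                fmt.Println("error getting node (will retry):", err)
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-time.After(500 * time.Millisecond):
	            }
	        }
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	        defer cancel()
	        fmt.Println(waitNodeReady(ctx, cs, "functional-699837"))
	    }
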
	I0804 08:55:47.381978 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:47.430195 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.433093 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.433123 1653676 retry.go:31] will retry after 10.012002454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.629500 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.629573 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.629910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.820313 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:47.870430 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.870476 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.870493 1653676 retry.go:31] will retry after 10.075489679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:48.128804 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.128895 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.129267 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:48.629030 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.629141 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.629503 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:49.129320 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:49.129409 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:49.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:49.129864 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:49.629600 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:49.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:49.629992 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:50.128745 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:50.128835 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:50.129191 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:50.628937 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:50.629015 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:50.629395 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:51.128731 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:51.128818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:51.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:51.628936 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:51.629009 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:51.629384 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:51.629473 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:52.129137 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:52.129221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:52.129575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:52.629361 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:52.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:52.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:53.129540 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:53.129620 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:53.129949 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:53.628671 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:53.628747 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:53.629071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:54.128801 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:54.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:54.129261 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:54.129334 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:54.629005 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:54.629105 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:54.629481 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:55.129371 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:55.129447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:55.129804 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:55.629597 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:55.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:55.630007 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:56.128707 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:56.128802 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:57.445382 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:57.946208 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:06.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:06.129644 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
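
	The milliseconds=10000 entries are not slow replies: Go's default http.Transport allows a TLS handshake 10 seconds, so while the apiserver accepts TCP connections but cannot complete TLS (here, while it restarts), each probe burns the full budget and surfaces as net/http: TLS handshake timeout. The knob in question, shown at its default value for illustration:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Transport: &http.Transport{
	                // http.DefaultTransport also uses 10s, matching the
	                // milliseconds=10000 entries in the log.
	                TLSHandshakeTimeout: 10 * time.Second,
	                // Illustration only; the real client verifies the cluster CA.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-699837")
	        fmt.Println(err) // net/http: TLS handshake timeout while the apiserver is half-up
	    }
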
	I0804 08:56:06.129694 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:06.129736 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.130254 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:16.130338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:56:16.130408 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:16.130480 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.262782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=132
	I0804 08:56:17.263910 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:56:17.264149 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264472 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:17.264610 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.264716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264973 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
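
	The with_retry.go entry appears to be the client honoring a server-advertised delay: a response finally arrived (after 132 ms, above) carrying Retry-After: 1, so the request was re-sent one second later. A bare net/http sketch of the same courtesy — a hypothetical helper, not client-go's with_retry implementation:

	    package main

	    import (
	        "fmt"
	        "net/http"
	        "strconv"
	        "time"
	    )

	    // getWithRetryAfter re-issues a GET whenever the server sets Retry-After,
	    // sleeping the advertised number of seconds between attempts.
	    func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	        for attempt := 1; ; attempt++ {
	            resp, err := http.Get(url)
	            if err != nil {
	                return nil, err
	            }
	            ra := resp.Header.Get("Retry-After")
	            if ra == "" || attempt >= maxAttempts {
	                return resp, nil
	            }
	            resp.Body.Close()
	            // Retry-After may also be an HTTP date; only the integer form is handled here.
	            secs, convErr := strconv.Atoi(ra)
	            if convErr != nil {
	                secs = 1
	            }
	            fmt.Printf("Got a Retry-After response: delay=%ds attempt=%d\n", secs, attempt)
	            time.Sleep(time.Duration(secs) * time.Second)
	        }
	    }

	    func main() {
	        resp, err := getWithRetryAfter("https://192.168.49.2:8441/api/v1/nodes/functional-699837", 3)
	        fmt.Println(resp, err)
	    }
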
	I0804 08:56:17.267370 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267420 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (19.822003727s)
	W0804 08:56:17.267450 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267470 1653676 retry.go:31] will retry after 18.146841122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267784 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267815 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (19.321577292s)
	W0804 08:56:17.267836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267852 1653676 retry.go:31] will retry after 19.077492147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.629331 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.629410 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.629777 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:18.129400 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:18.129489 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:18.129796 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:18.629536 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:18.629618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:18.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:18.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:19.129659 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:19.129746 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:19.130112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:19.628758 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:19.628835 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:19.629178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:20.128732 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:20.128806 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:20.129156 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:20.628674 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:20.628755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:20.629081 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:21.128792 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:21.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:21.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:21.129324 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:21.629020 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:21.629101 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:21.629489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:22.129299 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:22.129389 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:22.129751 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:22.629584 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:22.629664 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:22.629996 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:23.128722 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:23.128828 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:23.129192 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:23.628966 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:23.629055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:23.629374 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:23.629437 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:24.129128 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:24.129225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:24.129600 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:24.629381 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:24.629467 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:24.629838 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:25.129635 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:25.129755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:25.130108 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:25.628815 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:25.628905 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:25.629282 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:26.128941 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:26.129024 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:26.129386 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:26.129469 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:26.629153 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:26.629266 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:26.629626 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:27.129444 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:27.129526 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:27.129867 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:27.629658 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:27.629737 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:27.630140 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:28.128857 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:28.128947 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:28.129307 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:28.629734 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:28.629837 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:28.630240 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:28.630338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:29.129055 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:29.129168 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:29.129536 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:29.629363 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:29.629443 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:29.629791 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:30.129636 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:30.129710 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:30.130048 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:30.628774 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:30.628849 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:30.629212 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:31.128887 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:31.128984 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:31.129358 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:31.129426 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:31.629089 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:31.629164 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:31.629502 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:32.129335 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:32.129440 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:32.129852 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:32.629638 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:32.629720 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:32.630056 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:33.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:33.128882 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:33.129261 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:33.628999 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:33.629072 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:33.629432 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:33.629497 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:34.129184 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:34.129308 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:34.129684 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:34.629474 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:34.629546 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:34.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:35.129661 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:35.129748 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:35.130119 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:35.414447 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:35.463330 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:35.466231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.466267 1653676 retry.go:31] will retry after 13.873476046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.629483 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:35.629558 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:35.629897 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:35.629960 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:36.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:36.129713 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:36.130046 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:36.346375 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:36.394439 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:36.396962 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.396996 1653676 retry.go:31] will retry after 20.764306788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.629373 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:36.629461 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:36.629797 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:37.129619 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:37.129700 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:37.130049 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:37.628786 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:37.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:37.629214 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:38.129504 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll repeats every ~500ms through 08:56:49.129, each response empty, with the same node_ready.go:55 "connection refused" warning logged roughly every 2.5s ...]
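The loop condensed above is a plain readiness poll: minikube GETs the node object on a fixed ~500ms cadence and treats "connection refused" as retryable. A minimal sketch of that pattern using only the Go standard library (the real client negotiates client-cert TLS and protobuf content types; the function name and the InsecureSkipVerify shortcut here are illustrative assumptions, not minikube's node_ready.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForNode polls the node URL until the apiserver answers or the
	// timeout elapses. Sketch only; not minikube's implementation.
	func waitForNode(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test apiserver presents a self-signed certificate;
			// verification is skipped for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // apiserver reachable; the caller would now decode the Ready condition
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		return fmt.Errorf("node not reachable within %s", timeout)
	}

	func main() {
		err := waitForNode("https://192.168.49.2:8441/api/v1/nodes/functional-699837", 10*time.Second)
		fmt.Println(err)
	}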
	I0804 08:56:49.340493 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:49.391267 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:49.391322 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.391344 1653676 retry.go:31] will retry after 22.530122873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
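The retry.go:31 line above schedules a re-run of the failed kubectl apply after a randomized delay. A minimal sketch of that retry-the-command pattern, assuming kubectl is on PATH (the fixed backoff, attempt count, and helper name are illustrative; minikube's actual backoff is randomized):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
	// succeeds or the attempts are exhausted. Illustrative sketch only.
	func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			// Validation needs the apiserver's OpenAPI document, so a down
			// apiserver fails here; --validate=false would skip the check
			// but would not fix the refused connection.
			lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
			time.Sleep(backoff)
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 22*time.Second); err != nil {
			fmt.Println(err)
		}
	}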
	[... polling continues every ~500ms from 08:56:49.629 through 08:56:57.129 with the same empty responses and periodic node_ready.go:55 "connection refused" warnings ...]
	I0804 08:56:57.161690 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:57.212094 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212172 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212321 1653676 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
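Both addon applies fail for the same root cause: kubectl validates the manifest against https://localhost:8441 and nothing is listening there. A precondition probe of the apiserver's /readyz endpoint (a real kube-apiserver endpoint) would distinguish "manifest invalid" from "apiserver down" before attempting the apply. A minimal sketch, with the skip-verify transport again a sketch-only shortcut for the self-signed test certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	// apiserverReady probes /readyz and reports whether the apiserver
	// answered with 200 OK.
	func apiserverReady(base string) bool {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(base + "/readyz")
		if err != nil {
			return false // e.g. "connection refused", as in the log above
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		fmt.Println(apiserverReady("https://localhost:8441"))
	}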
	[... polling continues every ~500ms from 08:56:57.629 through 08:57:11.629 with the same empty responses and periodic node_ready.go:55 "connection refused" warnings ...]
	I0804 08:57:11.922305 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:57:11.970691 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973096 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973263 1653676 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 08:57:11.975142 1653676 out.go:177] * Enabled addons: 
	I0804 08:57:11.976503 1653676 addons.go:514] duration metric: took 1m43.454009966s for enable addons: enabled=[]
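The summary line above reports enabled=[] because every addon apply failed; the "duration metric" is simple elapsed-time bookkeeping. A minimal sketch of that bookkeeping, with the function name hypothetical and the empty result mirroring this run:

	package main

	import (
		"log"
		"time"
	)

	// enableAddons stands in for minikube's addon callbacks; in this run
	// both callbacks failed, so the enabled list comes back empty.
	func enableAddons() []string { return nil }

	func main() {
		start := time.Now()
		enabled := enableAddons()
		// Mirrors the "duration metric" line in the log above.
		log.Printf("duration metric: took %s for enable addons: enabled=%v", time.Since(start), enabled)
	}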
	[... polling continues every ~500ms from 08:57:12.129 through 08:57:34.129 with the same empty responses and periodic node_ready.go:55 "connection refused" warnings ...]
	I0804 08:57:34.629687 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:34.629766 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:34.630068 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:35.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:35.128868 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:35.129222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:35.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:35.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:35.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:36.129189 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:36.129297 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:36.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:36.129763 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:36.629508 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:36.629584 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:36.629873 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:37.129696 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:37.129776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:37.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:37.628857 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:37.628938 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:37.629221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:38.128990 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:38.129078 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:38.129487 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:38.629184 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:38.629289 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:38.629594 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:38.629667 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:39.129364 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:39.129441 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:39.129810 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:39.629603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:39.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:39.629968 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:40.128718 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:40.128797 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:40.129178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:40.628945 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:40.629021 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:40.629364 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:41.129136 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:41.129253 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:41.129612 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:41.129682 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:41.629452 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:41.629530 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:41.629831 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:42.129618 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:42.129707 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:42.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:42.628760 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:42.628838 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:42.629155 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:43.128868 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:43.128970 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:43.129365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:43.629090 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:43.629163 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:43.629503 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:43.629565 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:44.129335 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:44.129433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:44.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:44.629577 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:44.629649 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:44.629949 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:45.128664 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:45.128759 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:45.129131 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:45.628854 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:45.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:45.629229 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:46.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:46.129047 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:46.129442 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:46.129517 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:46.629268 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:46.629344 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:46.629668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:47.129457 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:47.129529 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:47.129867 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:47.629659 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:47.629734 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:47.630045 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:48.128764 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:48.128839 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:48.129183 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:48.628996 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:48.629085 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:48.629417 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:48.629493 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:49.129179 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:49.129288 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:49.129668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:49.629441 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:49.629513 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:49.629806 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:50.129603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:50.129678 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:50.130019 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:50.628730 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:50.628803 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:50.629119 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:51.128835 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:51.128916 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:51.129293 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:51.129364 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:51.629058 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:51.629136 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:51.629474 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:52.129201 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:52.129298 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:52.129723 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:52.629568 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:52.629654 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:52.630018 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:53.128764 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:53.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:53.129204 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:53.628946 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:53.629019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:53.629368 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:53.629442 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:54.129146 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:54.129225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:54.129608 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:54.629341 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:54.629417 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:54.629719 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:55.129545 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:55.129619 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:55.129967 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:55.628701 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:55.628776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:55.629095 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:56.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:56.128887 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:56.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:56.129347 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:56.629019 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:56.629096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:56.629435 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:57.129166 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:57.129283 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:57.129655 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:57.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:57.629534 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:57.629859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:58.129657 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:58.129755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:58.130109 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:58.130182 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:58.628778 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:58.628892 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:58.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:59.128942 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:59.129046 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:59.129427 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:59.629154 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:59.629257 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:59.629579 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:00.129357 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:00.129459 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:00.129797 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:00.629587 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:00.629677 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:00.630022 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:00.630087 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:01.128755 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:01.128831 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:01.129179 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:01.628959 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:01.629054 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:01.629420 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:02.129182 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:02.129295 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:02.129668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:02.629476 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:02.629572 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:02.629862 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:03.129679 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:03.129759 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:03.130099 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:03.130172 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:03.628846 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:03.628948 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:03.629308 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:04.129055 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:04.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:04.129501 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:04.629285 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:04.629371 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:04.629678 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:05.129485 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:05.129556 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:05.129895 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:05.629689 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:05.629775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:05.630092 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:05.630166 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:06.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:06.128884 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:06.129262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:06.628981 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:06.629094 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:06.629442 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:07.129153 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:07.129236 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:07.129612 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:07.629373 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:07.629460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:07.629767 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:08.129560 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:08.129642 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:08.129999 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:08.130067 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:08.628667 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:08.628761 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:08.629105 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:09.128826 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:09.128902 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:09.129208 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:09.628951 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:09.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:09.629355 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:10.129067 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:10.129144 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:10.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:10.629346 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:10.629440 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:10.629755 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:10.629825 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:11.129536 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:11.129607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:11.129931 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:11.628656 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:11.628740 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:11.629041 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:12.128773 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:12.128847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:12.129188 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:12.628944 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:12.629039 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:12.629370 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:13.129112 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:13.129185 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:13.129528 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:13.129601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:13.628854 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:13.628929 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:13.629262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:14.129022 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:14.129107 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:14.129456 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:14.629179 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:14.629262 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:14.629560 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:15.129358 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:15.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:15.129768 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:15.129842 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:15.629588 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:15.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:15.629993 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:16.128722 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:16.128807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:16.129155 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:16.628888 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:16.628968 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:16.629289 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:17.128871 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:17.128958 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:17.129331 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:17.629089 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:17.629163 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:17.629498 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:17.629579 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:18.129331 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:18.129413 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:18.129748 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:18.629352 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:18.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:18.629731 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:19.129531 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:19.129601 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:19.129926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:19.629715 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:19.629793 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:19.630096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:19.630165 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:20.128817 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:20.128892 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:20.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:20.628986 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:20.629062 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:20.629379 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:21.129140 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:21.129256 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:21.129611 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:21.629346 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:21.629422 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:21.629705 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:22.129503 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:22.129592 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:22.129936 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:22.130013 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:22.628702 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:22.628771 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:22.629065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:23.128773 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:23.128856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:23.129193 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:23.628915 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:23.629017 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:23.629329 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:24.629721 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET/response cycles repeat every ~500ms from 08:58:24 through 08:59:03, each attempt refused instantly; the same node_ready.go "connection refused" warning recurs roughly every 2.5s ...]
	W0804 08:59:03.629837 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
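
The run above is minikube's node-readiness wait: a GET on the node object every ~500 ms, each attempt refused while the apiserver is down, with the warning surfaced periodically rather than on every failure. A minimal sketch of that polling pattern with client-go follows; waitNodeReady is a hypothetical stand-in for the loop in minikube's node_ready.go, not its actual code.

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the apiserver every 500ms until the named node
// reports Ready or ctx expires; errors such as the "connection refused"
// seen above are logged and retried rather than aborting the wait.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

A fixed 500 ms tick reproduces the cadence of the log; production pollers often add jitter or backoff instead.
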
	I0804 08:59:04.129591 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:04.129684 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:14.133399 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10003
	W0804 08:59:14.133474 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:59:14.133535 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:14.133571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.134577 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:59:24.134670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
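
From 08:59:04 the failure mode changes: the same GET now blocks for ~10 s (milliseconds=10003, then 10000) and fails with a TLS handshake timeout instead of an instant connection refusal, which suggests port 8441 has started accepting TCP connections while the apiserver is not yet completing TLS. The 10 s cutoff matches the TLSHandshakeTimeout in Go's net/http DefaultTransport. A small sketch of the transport knob involved; the InsecureSkipVerify line is illustrative only, not what minikube does.

package apiclient

import (
	"crypto/tls"
	"net/http"
	"time"
)

// newAPIClient builds an HTTP client whose TLS handshake gives up after
// 10s, the same value net/http's DefaultTransport uses, and hence the
// ~10s responses in the log once the port accepted TCP but the
// apiserver was not yet serving TLS.
func newAPIClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			TLSHandshakeTimeout: 10 * time.Second,
			// Illustrative only: a real client verifies the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
}
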
	I0804 08:59:24.134743 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:24.134791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.447100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=312
	I0804 08:59:25.448003 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:59:25.448109 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448371 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:25.448473 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.448503 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
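
At 08:59:24 the apiserver finally answers (milliseconds=312), but tells the client to back off: with_retry.go reports a Retry-After of 1s and reissues the request as attempt=1. Below is a minimal sketch of honoring Retry-After with plain net/http; getWithRetry is a hypothetical helper, not client-go's actual implementation, and real clients typically honor the header only on 429/503 responses.

package retryget

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetry issues a GET and, when the server answers with a
// Retry-After header (as with_retry.go reports above), waits the
// advertised delay before trying again, up to maxAttempts retries.
func getWithRetry(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 0; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		delay := resp.Header.Get("Retry-After")
		if delay == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		if secs, perr := strconv.Atoi(delay); perr == nil {
			fmt.Printf("got Retry-After response, delay=%ds attempt=%d\n", secs, attempt+1)
			time.Sleep(time.Duration(secs) * time.Second)
		}
	}
}
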
	I0804 08:59:25.629198 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.629320 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.629693 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:26.629981 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the 500ms GET poll resumes and keeps failing with "connection refused" from 08:59:25 through 08:59:43, the warning recurring roughly every 2.5s ...]
	W0804 08:59:43.629613 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:44.129360 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.129442 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.129809 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:44.629604 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.629695 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.629982 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.128765 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.628969 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.629365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:46.129219 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.129334 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.129701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:46.129778 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:46.629522 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.629594 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.629887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.129668 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.129774 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.130135 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.628848 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.628924 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.629222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.128974 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.129074 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.129460 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.629189 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.629275 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.629575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:48.629637 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:49.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.129460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.129826 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:49.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.128684 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.128784 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.129153 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.628866 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.628940 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.629236 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:51.128964 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.129053 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.129443 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:51.129520 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:51.629181 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.629285 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.129363 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.129782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.629637 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.629921 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.128676 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.128760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.129117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:53.629319 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:54.129011 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.129119 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.129458 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:54.629169 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.629255 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.629563 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.129370 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.129456 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.129803 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.629586 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.629656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:55.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:56.129716 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.129807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.130158 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:56.628872 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.628960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.629280 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.129030 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.129533 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.629322 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.629394 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.629681 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:58.129475 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.129969 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:58.130041 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:58.629691 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.629768 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.630065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.128877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.129109 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.129205 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.129657 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.629456 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.629529 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:00.629939 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:01.129658 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.129735 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.130048 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:01.628777 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.628856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.629190 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:02.128935 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:02.129010 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:02.129319 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:02.628797 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:02.628877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:02.629137 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:03.128821 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:03.128896 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:03.129167 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:03.129224 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:03.628891 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:03.628974 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:03.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:04.129012 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:04.129096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:04.129462 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:04.629177 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:04.629276 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:04.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:05.129034 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:05.129129 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:05.129588 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:05.129664 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:05.629416 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:05.629491 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:05.629807 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:06.129708 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:06.129798 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:06.130177 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:06.628914 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:06.628986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:06.629309 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:07.129052 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:07.129152 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:07.129545 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:07.629359 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:07.629447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:07.629774 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:07.629843 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:08.129619 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:08.129703 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:08.130076 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:08.628794 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:08.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:08.629209 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:09.128966 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:09.129044 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:09.129548 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:09.629398 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:09.629478 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:09.629790 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:10.129602 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:10.129686 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:10.130062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:10.130134 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:10.628810 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:10.628888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:10.629214 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:11.128747 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:11.128824 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:11.129152 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:11.628878 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:11.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:11.629286 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:12.129028 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:12.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:12.129473 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:12.629262 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:12.629338 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:12.629618 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:12.629689 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:13.129417 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:13.129501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:13.129842 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:13.629621 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:13.629693 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:13.629988 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:14.128745 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:14.128832 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:14.129178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:14.628945 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:14.629017 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:14.629397 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:15.129144 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:15.129234 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:15.129617 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:15.129699 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:15.629451 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:15.629537 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:15.629859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:16.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:16.129725 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:16.130080 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:16.628842 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:16.628922 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:16.629262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:17.128979 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:17.129061 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:17.129404 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:17.629119 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:17.629192 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:17.629516 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:17.629592 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:18.129336 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:18.129414 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:18.129755 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:18.629486 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:18.629564 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:18.629881 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:19.129669 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:19.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:19.130101 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:19.628816 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:19.628890 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:19.629175 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:20.128910 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:20.128984 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:20.129330 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:20.129401 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:20.629078 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:20.629168 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:20.629501 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:21.129330 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:21.129424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:21.129762 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:21.629541 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:21.629617 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:21.629961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:22.128702 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:22.128777 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:22.129131 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:22.628835 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:22.628922 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:22.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:22.629330 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:23.128997 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:23.129087 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:23.129464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:23.629182 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:23.629286 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:23.629610 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:24.129357 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:24.129433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:24.129789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:24.629580 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:24.629654 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:24.630004 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:24.630071 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:25.128772 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:25.128875 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:25.129222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:25.628964 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:25.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:25.629409 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:26.129166 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:26.129260 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:26.129614 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:26.629352 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:26.629430 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:26.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:27.129507 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:27.129584 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:27.129930 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:27.129995 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:27.628677 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:27.628763 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:27.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:28.128831 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:28.128925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:28.129213 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:28.629034 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:28.629122 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:28.629430 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:29.129177 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:29.129276 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:29.129670 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:29.629478 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:29.629549 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:29.629842 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:29.629908 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:30.129649 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:30.129723 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:30.130078 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:30.628813 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:30.628886 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:30.629190 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:31.128911 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:31.128986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:31.129333 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:31.629040 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:31.629132 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:31.629470 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:32.129197 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:32.129290 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:32.129685 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:32.129763 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:32.629496 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:32.629568 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:32.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:33.129687 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:33.129771 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:33.130108 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:33.628818 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:33.628897 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:33.629202 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:34.128946 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:34.129020 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:34.129415 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:34.629628 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET /api/v1/nodes/functional-699837 request/response cycle repeats every ~500ms from 09:00:34 through 09:01:28, and the node_ready.go:55 "connection refused" warning recurs every couple of seconds; every poll fails the same way ...]
	I0804 09:01:28.629210 1653676 node_ready.go:38] duration metric: took 6m0.000644351s for node "functional-699837" to be "Ready" ...
	I0804 09:01:28.630996 1653676 out.go:201] 
	W0804 09:01:28.631963 1653676 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 09:01:28.631975 1653676 out.go:270] * 
	W0804 09:01:28.633557 1653676 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:01:28.634655 1653676 out.go:201] 
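
The six minutes of output above are one poll loop: the same GET is issued every ~500ms, transient failures such as connection refused are treated as retryable, and the loop gives up only when the overall deadline expires. Below is a minimal sketch of that poll-until-deadline pattern with client-go; the kubeconfig path and node name are taken from this run, and the code is illustrative rather than minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as mounted inside the minikube node in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, for at most 6 minutes, matching the cadence and the
	// "wait 6m0s for node" deadline visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-699837", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient error (e.g. connection refused): keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// When the condition never holds, this is "context deadline exceeded",
		// the same error the test surfaces as WaitNodeCondition.
		fmt.Println("node never became Ready:", err)
	}
}

Returning (false, nil) on a transient error is what turns every "connection refused" into a retry rather than an abort; only the 6m deadline ends the loop, which is why the failure above is reported as a timeout rather than a connection error.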
	
	
	==> Docker <==
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 systemd[1]: cri-docker.service: Deactivated successfully.
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start docker client with request timeout 0s"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Loaded network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 08:55:25 functional-699837 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a670d9d90ef4b3f9c8a2229b07375783d2742e14cb8b08de1d1d609352b31ca9/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6196286ba923f262b934ea01e1a6c54ba05e38908d2ce0251696c08a8b6e4e4f/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87c98d51b11aa2b27ab051d1a1e76c991403967dc4bbed5c8865a1c8839a006c/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dc39892c792c69f93a9689deb4a22058aa932aaab9b5a2ef60fe93066740a6a/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:56:16 functional-699837 dockerd[7186]: time="2025-08-04T08:56:16.274092329Z" level=info msg="ignoring event" container=6a82f093dfdcc77dca8bafe4751718938b424c4cd13715b8c25f8c91d4094c87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:25 functional-699837 dockerd[7186]: time="2025-08-04T08:56:25.952124711Z" level=info msg="ignoring event" container=d11d953e110f7fac9239023c8f301d3ea182fcc19934837d8f119e7d945ae14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:55 functional-699837 dockerd[7186]: time="2025-08-04T08:56:55.721506604Z" level=info msg="ignoring event" container=340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:24 functional-699837 dockerd[7186]: time="2025-08-04T08:59:24.457189004Z" level=info msg="ignoring event" container=a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:32 functional-699837 dockerd[7186]: time="2025-08-04T08:59:32.204638673Z" level=info msg="ignoring event" container=2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fafac7520c8d       9ad783615e1bc       2 minutes ago       Exited              kube-controller-manager   6                   87c98d51b11aa       kube-controller-manager-functional-699837
	a70a68ec61693       d85eea91cc41d       2 minutes ago       Exited              kube-apiserver            6                   6196286ba923f       kube-apiserver-functional-699837
	340fbe431c80a       1e30c0b1e9b99       4 minutes ago       Exited              etcd                      6                   a670d9d90ef4b       etcd-functional-699837
	3206d43d6e58f       21d34a2aeacf5       5 minutes ago       Running             kube-scheduler            2                   4dc39892c792c       kube-scheduler-functional-699837
	0cb03d71b984f       21d34a2aeacf5       6 minutes ago       Exited              kube-scheduler            1                   cdae8372eae9d       kube-scheduler-functional-699837
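
The table above is the CRI's view of the control plane: etcd, kube-apiserver and kube-controller-manager have all exited after six restart attempts, and only kube-scheduler is still running. A minimal way to reproduce this listing on the node, assuming the default cri-dockerd endpoint (a sketch, not part of the test run):

  minikube ssh -p functional-699837 -- sudo crictl ps -a
  # narrow to a single component, e.g. etcd:
  minikube ssh -p functional-699837 -- sudo crictl ps -a --name etcd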
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:01:31.710178    9529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:31.710682    9529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:31.712253    9529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:31.712673    9529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:31.714097    9529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
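
Every kubectl call here dials localhost:8441 and is refused, consistent with the kube-apiserver container sitting in Exited state above. A quick probe of the apiserver's secure port from inside the node (a sketch; -k skips TLS verification):

  minikube ssh -p functional-699837 -- curl -sk https://localhost:8441/healthz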
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
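
The repeated "martian source" entries are the kernel logging packets that arrive on eth0 with a source address it does not expect; with bridged pod traffic (10.244.0.0/16) on a kicbase node this is noisy but usually benign. Whether that logging is enabled can be checked with sysctl (a sketch):

  minikube ssh -p functional-699837 -- sysctl net.ipv4.conf.all.log_martians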
	
	
	==> etcd [340fbe431c80] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
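
This is the root cause of the cascade below: the etcd binary in this image rejects --proxy-refresh-interval, a v2-proxy flag dropped in newer etcd releases (the v2 proxy was removed in v3.6), while the generated static-pod manifest apparently still passes it, so etcd exits immediately on every restart attempt. A minimal check, assuming the standard kubeadm manifest path inside the node:

  minikube ssh -p functional-699837 -- grep -n proxy-refresh-interval /etc/kubernetes/manifests/etcd.yaml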
	
	
	
	==> kernel <==
	 09:01:31 up 1 day, 17:43,  0 users,  load average: 0.01, 0.05, 0.34
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [a70a68ec6169] <==
	W0804 08:59:04.426148       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.426280       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 08:59:04.427463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 08:59:04.434192       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 08:59:04.440592       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 08:59:04.440613       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 08:59:04.440846       1 instance.go:232] Using reconciler: lease
	W0804 08:59:04.441668       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.441684       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.441981       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:07.008411       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:07.025679       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:07.166787       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:09.765027       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:09.806488       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:10.063522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:13.932343       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:14.037582       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:14.089064       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:19.259004       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:19.470708       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:20.945736       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 08:59:24.442401       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
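
Everything in this block is downstream of the etcd crash: every gRPC dial to 127.0.0.1:2379 is refused, and once the storage-factory deadline expires the apiserver exits with the fatal "Error creating leases" above, feeding its own CrashLoopBackOff. If etcdctl is available in the image, endpoint health can be probed directly (a sketch; the cert paths assume minikube's usual kubeadm layout):

  minikube ssh -p functional-699837 -- sudo etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health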
	
	
	==> kube-controller-manager [2fafac7520c8] <==
	I0804 08:59:11.887703       1 serving.go:386] Generated self-signed cert in-memory
	I0804 08:59:12.166874       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 08:59:12.166898       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 08:59:12.168293       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 08:59:12.168315       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 08:59:12.168600       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 08:59:12.168727       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 08:59:32.171192       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
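
The controller-manager itself starts cleanly (it is serving on 127.0.0.1:10257 per the log) and only exits after its roughly 20-second wait for the apiserver's /healthz times out. While it is up, its own health endpoint answers locally (a sketch):

  minikube ssh -p functional-699837 -- curl -sk https://127.0.0.1:10257/healthz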
	
	
	==> kube-scheduler [0cb03d71b984] <==
	
	
	==> kube-scheduler [3206d43d6e58] <==
	E0804 09:00:12.260216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:00:13.558952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:00:13.721571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:00:16.379946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:00:23.348524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:00:28.563885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:00:32.014424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:00:33.033677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:00:47.281529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:00:47.653383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:00:48.988484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:00:54.836226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:00:54.975251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:00:57.394600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:00:59.500812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:01:00.013055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:01:00.539902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:01:01.692270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:01:02.088398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:01:08.204402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:01:09.352314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:01:11.128294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:01:23.683836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:01:24.236788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:01:31.276535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	
	
	==> kubelet <==
	Aug 04 09:01:18 functional-699837 kubelet[4226]: E0804 09:01:18.142560    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.185884435643239b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-699837 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,LastTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:18 functional-699837 kubelet[4226]: E0804 09:01:18.142667    4226 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{functional-699837.185884435643239b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-699837 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,LastTimestamp:2025-08-04 08:51:19.605724059 +0000 UTC m=+0.317011778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:18 functional-699837 kubelet[4226]: E0804 09:01:18.142986    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:19 functional-699837 kubelet[4226]: E0804 09:01:19.656720    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: E0804 09:01:21.599078    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: I0804 09:01:21.599164    4226 scope.go:117] "RemoveContainer" containerID="340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: E0804 09:01:21.599350    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:01:21 functional-699837 kubelet[4226]: E0804 09:01:21.602204    4226 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:01:22 functional-699837 kubelet[4226]: E0804 09:01:22.598787    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:22 functional-699837 kubelet[4226]: I0804 09:01:22.598874    4226 scope.go:117] "RemoveContainer" containerID="a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359"
	Aug 04 09:01:22 functional-699837 kubelet[4226]: E0804 09:01:22.599029    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(2b39e4280fdde7528fa65c33493b517b)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="2b39e4280fdde7528fa65c33493b517b"
	Aug 04 09:01:23 functional-699837 kubelet[4226]: I0804 09:01:23.480767    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:23 functional-699837 kubelet[4226]: E0804 09:01:23.481137    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.396607    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.466107    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.706024    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.936556    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:01:28 functional-699837 kubelet[4226]: E0804 09:01:28.598604    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:29 functional-699837 kubelet[4226]: E0804 09:01:29.657833    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:01:30 functional-699837 kubelet[4226]: I0804 09:01:30.482479    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:30 functional-699837 kubelet[4226]: E0804 09:01:30.482883    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.467464    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.599251    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: I0804 09:01:31.599334    4226 scope.go:117] "RemoveContainer" containerID="2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.599476    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
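
By this point the kubelet is in steady state: each control-plane container has hit the CrashLoopBackOff ceiling ("back-off 5m0s"), node registration and lease renewal fail against the dead apiserver, and event writes have exhausted their retries. The exited containers and their restart counts can also be listed through the node's inner Docker (a sketch; cri-dockerd keeps the k8s_<container>_<pod>_... naming convention):

  minikube ssh -p functional-699837 -- sudo docker ps -a --filter name=k8s_etcd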
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (266.849387ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubectlGetPods (1.74s)

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd (1.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 kubectl -- --context functional-699837 get pods
functional_test.go:733: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 kubectl -- --context functional-699837 get pods: exit status 1 (96.803255ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:736: failed to get pods. args "out/minikube-linux-amd64 -p functional-699837 kubectl -- --context functional-699837 get pods": exit status 1
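
minikube kubectl -- delegates to a version-matched kubectl with the given context, so this is the same refused 192.168.49.2:8441 seen throughout the post-mortems. The server URL the context resolves to can be read straight from the kubeconfig (a sketch; the jsonpath filter assumes the cluster is named after the profile):

  kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-699837")].cluster.server}'
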
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
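
The inspect output shows the node container itself healthy and running, with apiserver port 8441/tcp published to 127.0.0.1:32786 on the host, so the connection refusals originate inside the container rather than in Docker networking. The mapping can be read without parsing the full JSON:

  docker port functional-699837 8441/tcp
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-699837
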
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (261.51723ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-114794 image ls --format short --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format yaml --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh     │ functional-114794 ssh pgrep buildkitd                                                                                                               │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ image   │ functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr                                              │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format json --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format table --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls                                                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ delete  │ -p functional-114794                                                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ start   │ -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start   │ -p functional-699837 --alsologtostderr -v=8                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:55 UTC │                     │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.1                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.3                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:latest                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add minikube-local-cache-test:functional-699837                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache delete minikube-local-cache-test:functional-699837                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ list                                                                                                                                                │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl images                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	│ cache   │ functional-699837 cache reload                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                 │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ kubectl │ functional-699837 kubectl -- --context functional-699837 get pods                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:55:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
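	(Reading the entries below: in "I0804 08:55:20.770600 1653676 out.go:345]", I is the severity (Info; W/E/F would be warning, error, fatal), 0804 the month and day, 08:55:20.770600 the wall-clock time, 1653676 the thread id, and out.go:345 the source file and line that emitted the message.)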
	I0804 08:55:20.770600 1653676 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:55:20.770872 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.770883 1653676 out.go:358] Setting ErrFile to fd 2...
	I0804 08:55:20.770890 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.771067 1653676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:55:20.771644 1653676 out.go:352] Setting JSON to false
	I0804 08:55:20.772653 1653676 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149810,"bootTime":1754147911,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:55:20.772739 1653676 start.go:140] virtualization: kvm guest
	I0804 08:55:20.774597 1653676 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:55:20.775675 1653676 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:55:20.775678 1653676 notify.go:220] Checking for updates...
	I0804 08:55:20.776705 1653676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:55:20.777818 1653676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:20.778845 1653676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:55:20.779811 1653676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:55:20.780885 1653676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:55:20.782127 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:20.782240 1653676 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:55:20.804704 1653676 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:55:20.804841 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.850605 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.841828701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.850698 1653676 docker.go:318] overlay module found
	I0804 08:55:20.852305 1653676 out.go:177] * Using the docker driver based on existing profile
	I0804 08:55:20.853166 1653676 start.go:304] selected driver: docker
	I0804 08:55:20.853179 1653676 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.853275 1653676 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:55:20.853364 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.899900 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.891412564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.900590 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:20.900687 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:20.900743 1653676 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.902216 1653676 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 08:55:20.903155 1653676 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:55:20.904009 1653676 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:55:20.904940 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:20.904978 1653676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:55:20.904991 1653676 cache.go:56] Caching tarball of preloaded images
	I0804 08:55:20.905036 1653676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:55:20.905069 1653676 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 08:55:20.905079 1653676 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 08:55:20.905203 1653676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 08:55:20.923511 1653676 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 08:55:20.923529 1653676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 08:55:20.923544 1653676 cache.go:230] Successfully downloaded all kic artifacts
	I0804 08:55:20.923577 1653676 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 08:55:20.923631 1653676 start.go:364] duration metric: took 36.633µs to acquireMachinesLock for "functional-699837"
	I0804 08:55:20.923647 1653676 start.go:96] Skipping create...Using existing machine configuration
	I0804 08:55:20.923652 1653676 fix.go:54] fixHost starting: 
	I0804 08:55:20.923842 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:20.940410 1653676 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 08:55:20.940440 1653676 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 08:55:20.942107 1653676 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 08:55:20.943161 1653676 machine.go:93] provisionDockerMachine start ...
	I0804 08:55:20.943249 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:20.959620 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:20.959871 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:20.959884 1653676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 08:55:21.080396 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.080433 1653676 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 08:55:21.080500 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.097426 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.097649 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.097666 1653676 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 08:55:21.227825 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.227926 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.246066 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.246278 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.246294 1653676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 08:55:21.373154 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
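	(The hosts script above is idempotent: it does nothing when some line of /etc/hosts already ends in the machine name, rewrites an existing 127.0.1.1 entry in place via sed, and only appends "127.0.1.1 functional-699837" via tee otherwise. The empty output here is consistent with the first two cases, since only the tee branch echoes anything.)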
	I0804 08:55:21.373185 1653676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 08:55:21.373228 1653676 ubuntu.go:177] setting up certificates
	I0804 08:55:21.373273 1653676 provision.go:84] configureAuth start
	I0804 08:55:21.373335 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:21.390471 1653676 provision.go:143] copyHostCerts
	I0804 08:55:21.390507 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390548 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 08:55:21.390558 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390632 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 08:55:21.390734 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390760 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 08:55:21.390767 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390803 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 08:55:21.390876 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390902 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 08:55:21.390914 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390947 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 08:55:21.391030 1653676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
	I0804 08:55:21.573518 1653676 provision.go:177] copyRemoteCerts
	I0804 08:55:21.573582 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 08:55:21.573618 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.591269 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:21.681513 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 08:55:21.681585 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 08:55:21.702708 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 08:55:21.702758 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 08:55:21.723583 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 08:55:21.723630 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 08:55:21.744569 1653676 provision.go:87] duration metric: took 371.27679ms to configureAuth
	I0804 08:55:21.744602 1653676 ubuntu.go:193] setting minikube options for container-runtime
	I0804 08:55:21.744799 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:21.744861 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.762017 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.762244 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.762255 1653676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 08:55:21.889470 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 08:55:21.889494 1653676 ubuntu.go:71] root file system type: overlay
	I0804 08:55:21.889614 1653676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 08:55:21.889686 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.906485 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.906734 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.906827 1653676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 08:55:22.043972 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 08:55:22.044042 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.061528 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:22.061801 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:22.061820 1653676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 08:55:22.189999 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 08:55:22.190024 1653676 machine.go:96] duration metric: took 1.246850112s to provisionDockerMachine
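	The unit update just above is a compare-then-swap: the regenerated unit is written to docker.service.new, and the mv / daemon-reload / restart chain behind the || only fires when diff exits non-zero, i.e. when the rendered unit actually differs from the installed one. The empty diff output in this run means the unit was unchanged and Docker was not restarted. A minimal sketch of the same pattern, with render_unit as a hypothetical stand-in for the printf used above:
		render_unit | sudo tee /lib/systemd/system/docker.service.new >/dev/null
		if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
		    # files differ: install the new unit and bounce the daemon
		    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
		    sudo systemctl daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
		fi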
	I0804 08:55:22.190035 1653676 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 08:55:22.190046 1653676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 08:55:22.190105 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 08:55:22.190157 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.207121 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.297799 1653676 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 08:55:22.300559 1653676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0804 08:55:22.300580 1653676 command_runner.go:130] > NAME="Ubuntu"
	I0804 08:55:22.300588 1653676 command_runner.go:130] > VERSION_ID="22.04"
	I0804 08:55:22.300596 1653676 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0804 08:55:22.300602 1653676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0804 08:55:22.300608 1653676 command_runner.go:130] > ID=ubuntu
	I0804 08:55:22.300614 1653676 command_runner.go:130] > ID_LIKE=debian
	I0804 08:55:22.300622 1653676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0804 08:55:22.300634 1653676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0804 08:55:22.300652 1653676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0804 08:55:22.300662 1653676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0804 08:55:22.300667 1653676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0804 08:55:22.300719 1653676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 08:55:22.300753 1653676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 08:55:22.300768 1653676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 08:55:22.300780 1653676 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 08:55:22.300795 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 08:55:22.300857 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 08:55:22.300964 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 08:55:22.300977 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /etc/ssl/certs/15826902.pem
	I0804 08:55:22.301064 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 08:55:22.301073 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> /etc/test/nested/copy/1582690/hosts
	I0804 08:55:22.301115 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 08:55:22.308734 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:22.329778 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 08:55:22.350435 1653676 start.go:296] duration metric: took 160.385758ms for postStartSetup
	I0804 08:55:22.350534 1653676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 08:55:22.350588 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.367129 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.453443 1653676 command_runner.go:130] > 33%
	I0804 08:55:22.453718 1653676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 08:55:22.457863 1653676 command_runner.go:130] > 197G
	I0804 08:55:22.457888 1653676 fix.go:56] duration metric: took 1.534232726s for fixHost
	I0804 08:55:22.457898 1653676 start.go:83] releasing machines lock for "functional-699837", held for 1.534258328s
	I0804 08:55:22.457964 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:22.474710 1653676 ssh_runner.go:195] Run: cat /version.json
	I0804 08:55:22.474768 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.474834 1653676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 08:55:22.474905 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.492489 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.492983 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.576302 1653676 command_runner.go:130] > {"iso_version": "v1.36.0-1753487480-21147", "kicbase_version": "v0.0.47-1753871403-21198", "minikube_version": "v1.36.0", "commit": "69470231e9abd2d11a84a83b271e426458d5d12f"}
	I0804 08:55:22.576422 1653676 ssh_runner.go:195] Run: systemctl --version
	I0804 08:55:22.653754 1653676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 08:55:22.655827 1653676 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.16)
	I0804 08:55:22.655870 1653676 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0804 08:55:22.655949 1653676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 08:55:22.659872 1653676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0804 08:55:22.659895 1653676 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:22.659905 1653676 command_runner.go:130] > Device: 37h/55d	Inode: 822247      Links: 1
	I0804 08:55:22.659914 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:22.659929 1653676 command_runner.go:130] > Access: 2025-08-04 08:46:48.521872821 +0000
	I0804 08:55:22.659937 1653676 command_runner.go:130] > Modify: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659947 1653676 command_runner.go:130] > Change: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659959 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.660164 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 08:55:22.676431 1653676 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 08:55:22.676489 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 08:55:22.683904 1653676 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
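	On a typical kicbase image the loopback patch above amounts to an in-place rewrite of this shape (file contents illustrative, not captured from this run):
		# /etc/cni/net.d/200-loopback.conf, before
		{
		    "cniVersion": "0.3.1",
		    "type": "loopback"
		}
		# after: a "name" key is inserted ahead of "type" and cniVersion is rewritten to 1.0.0
		{
		    "cniVersion": "1.0.0",
		    "name": "loopback",
		    "type": "loopback"
		}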
	I0804 08:55:22.683925 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:22.683957 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:22.684079 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:22.696848 1653676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0804 08:55:22.698010 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
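	The checksum=file: suffix on these kubeadm URLs is go-getter-style verification: the binary is checked against the published .sha256 before being used rather than cached blindly. A hand-rolled equivalent of the same check (same URLs as above; the dl.k8s.io .sha256 files hold just the hex digest):
		curl -fsSLO https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm
		# sha256sum -c expects "<digest>  <filename>"
		echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256)  kubeadm" | sha256sum -c -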
	I0804 08:55:23.084233 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 08:55:23.094208 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 08:55:23.103030 1653676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 08:55:23.103076 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 08:55:23.111645 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.120216 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 08:55:23.128524 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.137020 1653676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 08:55:23.144932 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 08:55:23.153318 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 08:55:23.161730 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 08:55:23.170124 1653676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 08:55:23.176419 1653676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 08:55:23.177058 1653676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 08:55:23.184211 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:23.265466 1653676 ssh_runner.go:195] Run: sudo systemctl restart containerd
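	Net effect of the sed substitutions run between 08:55:23.08 and 08:55:23.16: the CRI section of /etc/containerd/config.toml should end up with values along these lines (an illustrative fragment assembled from those substitutions, not a dump from this run):
		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true
		  sandbox_image = "registry.k8s.io/pause:3.10"
		  restrict_oom_score_adj = false
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		    runtime_type = "io.containerd.runc.v2"
		    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		      SystemdCgroup = false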
	I0804 08:55:23.467281 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:23.467337 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:23.467388 1653676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 08:55:23.477772 1653676 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0804 08:55:23.477865 1653676 command_runner.go:130] > [Unit]
	I0804 08:55:23.477892 1653676 command_runner.go:130] > Description=Docker Application Container Engine
	I0804 08:55:23.477904 1653676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0804 08:55:23.477912 1653676 command_runner.go:130] > BindsTo=containerd.service
	I0804 08:55:23.477924 1653676 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0804 08:55:23.477935 1653676 command_runner.go:130] > Wants=network-online.target
	I0804 08:55:23.477942 1653676 command_runner.go:130] > Requires=docker.socket
	I0804 08:55:23.477950 1653676 command_runner.go:130] > StartLimitBurst=3
	I0804 08:55:23.477958 1653676 command_runner.go:130] > StartLimitIntervalSec=60
	I0804 08:55:23.477963 1653676 command_runner.go:130] > [Service]
	I0804 08:55:23.477971 1653676 command_runner.go:130] > Type=notify
	I0804 08:55:23.477977 1653676 command_runner.go:130] > Restart=on-failure
	I0804 08:55:23.477992 1653676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0804 08:55:23.478010 1653676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0804 08:55:23.478023 1653676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0804 08:55:23.478048 1653676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0804 08:55:23.478062 1653676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0804 08:55:23.478073 1653676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0804 08:55:23.478088 1653676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0804 08:55:23.478104 1653676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0804 08:55:23.478125 1653676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0804 08:55:23.478140 1653676 command_runner.go:130] > ExecStart=
	I0804 08:55:23.478162 1653676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0804 08:55:23.478451 1653676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0804 08:55:23.478489 1653676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0804 08:55:23.478505 1653676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0804 08:55:23.478520 1653676 command_runner.go:130] > LimitNOFILE=infinity
	I0804 08:55:23.478529 1653676 command_runner.go:130] > LimitNPROC=infinity
	I0804 08:55:23.478536 1653676 command_runner.go:130] > LimitCORE=infinity
	I0804 08:55:23.478544 1653676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0804 08:55:23.478559 1653676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0804 08:55:23.478570 1653676 command_runner.go:130] > TasksMax=infinity
	I0804 08:55:23.478576 1653676 command_runner.go:130] > TimeoutStartSec=0
	I0804 08:55:23.478586 1653676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0804 08:55:23.478592 1653676 command_runner.go:130] > Delegate=yes
	I0804 08:55:23.478606 1653676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0804 08:55:23.478612 1653676 command_runner.go:130] > KillMode=process
	I0804 08:55:23.478659 1653676 command_runner.go:130] > [Install]
	I0804 08:55:23.478680 1653676 command_runner.go:130] > WantedBy=multi-user.target
	I0804 08:55:23.480586 1653676 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 08:55:23.480654 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 08:55:23.491375 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:23.505761 1653676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
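	(Note that /etc/crictl.yaml is rewritten here: the runtime endpoint pointed at the containerd socket at 08:55:22.684 is replaced with unix:///var/run/cri-dockerd.sock, since this profile drives Kubernetes against Docker through the cri-dockerd shim rather than against containerd directly.)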
	I0804 08:55:23.506806 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.923432 1653676 ssh_runner.go:195] Run: which cri-dockerd
	I0804 08:55:23.926961 1653676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0804 08:55:23.927156 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 08:55:23.935149 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 08:55:23.950832 1653676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 08:55:24.042992 1653676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 08:55:24.297851 1653676 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 08:55:24.297998 1653676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 08:55:24.377001 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 08:55:24.388783 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:24.510366 1653676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 08:55:24.982429 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 08:55:24.992600 1653676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 08:55:25.006985 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.016432 1653676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 08:55:25.099651 1653676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 08:55:25.175485 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.251241 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 08:55:25.263161 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 08:55:25.272497 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.348098 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 08:55:25.408736 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.419584 1653676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 08:55:25.419655 1653676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 08:55:25.422672 1653676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0804 08:55:25.422693 1653676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 08:55:25.422702 1653676 command_runner.go:130] > Device: 45h/69d	Inode: 1258        Links: 1
	I0804 08:55:25.422711 1653676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0804 08:55:25.422722 1653676 command_runner.go:130] > Access: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422730 1653676 command_runner.go:130] > Modify: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422743 1653676 command_runner.go:130] > Change: 2025-08-04 08:55:25.357889711 +0000
	I0804 08:55:25.422749 1653676 command_runner.go:130] >  Birth: -
	I0804 08:55:25.422776 1653676 start.go:563] Will wait 60s for crictl version
	I0804 08:55:25.422814 1653676 ssh_runner.go:195] Run: which crictl
	I0804 08:55:25.425611 1653676 command_runner.go:130] > /usr/bin/crictl
	I0804 08:55:25.425730 1653676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 08:55:25.455697 1653676 command_runner.go:130] > Version:  0.1.0
	I0804 08:55:25.455721 1653676 command_runner.go:130] > RuntimeName:  docker
	I0804 08:55:25.455727 1653676 command_runner.go:130] > RuntimeVersion:  28.3.3
	I0804 08:55:25.455733 1653676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 08:55:25.458002 1653676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 08:55:25.458069 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.480067 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.481564 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.502625 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.506722 1653676 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 08:55:25.506807 1653676 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
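	The Go template above flattens docker network inspect into one JSON object. For this cluster's network the output would look roughly like the following (Subnet and Gateway follow from the 192.168.49.x addresses seen in this log; Driver and MTU are assumed defaults, and the trailing comma in ContainerIPs is what the range in the template literally emits):
		{"Name": "functional-699837","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 0, "ContainerIPs": ["192.168.49.2/24",]}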
	I0804 08:55:25.523376 1653676 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 08:55:25.526929 1653676 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0804 08:55:25.527043 1653676 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 08:55:25.527223 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:25.922076 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.309911 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.726305 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:26.726461 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.101061 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.477147 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.859614 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.878541 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.878563 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.878570 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.878580 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.878585 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.878590 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.878595 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.878599 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.878603 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.879821 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.879847 1653676 docker.go:633] Images already preloaded, skipping extraction
	I0804 08:55:27.879906 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.898058 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.898084 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.898091 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.898095 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.898099 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.898103 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.898109 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.898113 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.898117 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.898143 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.898167 1653676 cache_images.go:85] Images are preloaded, skipping loading
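The stanza above runs `docker images --format {{.Repository}}:{{.Tag}}` and concludes that the preload tarball does not need extracting because every required image is already in the Docker daemon. A rough sketch of that presence check (the required list is abbreviated from the nine images in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same listing command as in the log: one repository:tag per line.
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[img] = true
    	}
    	required := []string{ // abbreviated; the log lists nine images
    		"registry.k8s.io/kube-apiserver:v1.34.0-beta.0",
    		"registry.k8s.io/etcd:3.5.21-0",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing, preload extraction needed:", img)
    		}
    	}
    }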
	I0804 08:55:27.898180 1653676 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 08:55:27.898290 1653676 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
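The kubelet systemd drop-in above is generated from the node settings that follow it (version, hostname override, node IP). A minimal sketch, not minikube's actual template mechanism, that assembles the same drop-in text; it writes to /tmp for illustration, while the log's scp places it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	version, node, ip := "v1.34.0-beta.0", "functional-699837", "192.168.49.2"
    	// Flags mirror the ExecStart line in the log; the empty ExecStart=
    	// first clears any ExecStart inherited from the base unit.
    	execStart := fmt.Sprintf(
    		"ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
    		version, node, ip)
    	unit := "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\n" + execStart + "\n\n[Install]\n"
    	if err := os.WriteFile("/tmp/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
    		panic(err)
    	}
    }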
	I0804 08:55:27.898340 1653676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 08:55:27.944494 1653676 command_runner.go:130] > cgroupfs
	I0804 08:55:27.946023 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:27.946045 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:27.946061 1653676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 08:55:27.946082 1653676 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 08:55:27.946247 1653676 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 08:55:27.946320 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 08:55:27.953892 1653676 command_runner.go:130] > kubeadm
	I0804 08:55:27.953910 1653676 command_runner.go:130] > kubectl
	I0804 08:55:27.953915 1653676 command_runner.go:130] > kubelet
	I0804 08:55:27.954677 1653676 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 08:55:27.954730 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 08:55:27.962553 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 08:55:27.978365 1653676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 08:55:27.994068 1653676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
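The kubeadm.yaml.new just copied over (2302 bytes, matching the scp line above) is the four-document YAML stream printed earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sanity-check sketch, assuming gopkg.in/yaml.v3, that reads the stream and confirms the KubeletConfiguration's cgroupDriver matches the `docker info --format {{.CgroupDriver}}` probe from the log:

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    type doc struct {
    	Kind         string `yaml:"kind"`
    	CgroupDriver string `yaml:"cgroupDriver"` // only set on KubeletConfiguration
    }

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f) // the file is a multi-document YAML stream
    	for {
    		var d doc
    		if err := dec.Decode(&d); err != nil {
    			break // io.EOF once all four documents are read
    		}
    		if d.Kind == "KubeletConfiguration" {
    			fmt.Println("cgroupDriver:", d.CgroupDriver) // expect "cgroupfs"
    		}
    	}
    }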
	I0804 08:55:28.009976 1653676 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 08:55:28.013276 1653676 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0804 08:55:28.013353 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.101449 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.112250 1653676 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 08:55:28.112270 1653676 certs.go:194] generating shared ca certs ...
	I0804 08:55:28.112291 1653676 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.112464 1653676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 08:55:28.112506 1653676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 08:55:28.112516 1653676 certs.go:256] generating profile certs ...
	I0804 08:55:28.112631 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 08:55:28.112686 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 08:55:28.112722 1653676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 08:55:28.112733 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 08:55:28.112747 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 08:55:28.112759 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 08:55:28.112772 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 08:55:28.112783 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 08:55:28.112795 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 08:55:28.112808 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 08:55:28.112819 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 08:55:28.112866 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 08:55:28.112898 1653676 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 08:55:28.112907 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 08:55:28.112929 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 08:55:28.112954 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 08:55:28.112975 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 08:55:28.113011 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:28.113036 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.113051 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.113068 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem -> /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.113660 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 08:55:28.135009 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 08:55:28.155784 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 08:55:28.176520 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 08:55:28.197558 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 08:55:28.218349 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 08:55:28.239391 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 08:55:28.259973 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 08:55:28.280899 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 08:55:28.301872 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 08:55:28.322816 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 08:55:28.343561 1653676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 08:55:28.359122 1653676 ssh_runner.go:195] Run: openssl version
	I0804 08:55:28.363884 1653676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0804 08:55:28.364128 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 08:55:28.372266 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375320 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375365 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375402 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.381281 1653676 command_runner.go:130] > b5213941
	I0804 08:55:28.381530 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 08:55:28.388997 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 08:55:28.397048 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399946 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399991 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.400016 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.406052 1653676 command_runner.go:130] > 51391683
	I0804 08:55:28.406304 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 08:55:28.413987 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 08:55:28.422286 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425317 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425349 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425376 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.431562 1653676 command_runner.go:130] > 3ec20f2e
	I0804 08:55:28.431844 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
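The three test/ln/openssl cycles above install each PEM the way c_rehash does: compute the certificate's subject hash with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL's hashed-directory lookup can find the CA. A sketch of one cycle, assuming root privileges and reusing a path from the log:

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same invocation as the log; prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // emulate ln -fs: replace a stale link if present
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    }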
	I0804 08:55:28.439543 1653676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442556 1653676 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442581 1653676 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:28.442590 1653676 command_runner.go:130] > Device: 801h/2049d	Inode: 822354      Links: 1
	I0804 08:55:28.442597 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:28.442603 1653676 command_runner.go:130] > Access: 2025-08-04 08:51:18.188665144 +0000
	I0804 08:55:28.442607 1653676 command_runner.go:130] > Modify: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442614 1653676 command_runner.go:130] > Change: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442619 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442691 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 08:55:28.448546 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.448806 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 08:55:28.454608 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.454889 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 08:55:28.460580 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.460805 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 08:55:28.466615 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.466839 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 08:55:28.472661 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.472705 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 08:55:28.478445 1653676 command_runner.go:130] > Certificate will not expire
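The six `-checkend 86400` probes above are a cheap expiry test: openssl exits 0 when the certificate is still valid 86400 seconds (24 h) from now and non-zero when it would expire within that window, which is what decides whether certs get regenerated on restart. A sketch of the same probe over a subset of the paths from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		// Exit status 0 corresponds to the log's "Certificate will not expire".
    		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
    		if err != nil {
    			fmt.Println(c, "expires within 24h; would be regenerated")
    		} else {
    			fmt.Println(c, "will not expire")
    		}
    	}
    }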
	I0804 08:55:28.478508 1653676 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:28.478619 1653676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 08:55:28.496419 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 08:55:28.503804 1653676 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0804 08:55:28.503825 1653676 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0804 08:55:28.503833 1653676 command_runner.go:130] > /var/lib/minikube/etcd:
	I0804 08:55:28.504531 1653676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 08:55:28.504546 1653676 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 08:55:28.504584 1653676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 08:55:28.511980 1653676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 08:55:28.512384 1653676 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-699837" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.512513 1653676 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "functional-699837" cluster setting kubeconfig missing "functional-699837" context setting]
	I0804 08:55:28.512791 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.513199 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.513384 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.513811 1653676 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0804 08:55:28.513826 1653676 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0804 08:55:28.513833 1653676 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0804 08:55:28.513839 1653676 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0804 08:55:28.513844 1653676 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0804 08:55:28.513876 1653676 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
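kapi.go builds the rest.Config dumped above directly from the freshly repaired kubeconfig. A sketch of the equivalent load with client-go's clientcmd helper, reusing the kubeconfig path from the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := "/home/jenkins/minikube-integration/21223-1578987/kubeconfig"
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	// Host, client cert/key, and CA paths come out as in the rest.Config above.
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("API host:", cfg.Host, "clientset ready:", cs != nil)
    }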
	I0804 08:55:28.514257 1653676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 08:55:28.521605 1653676 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0804 08:55:28.521634 1653676 kubeadm.go:593] duration metric: took 17.082556ms to restartPrimaryControlPlane
	I0804 08:55:28.521645 1653676 kubeadm.go:394] duration metric: took 43.142663ms to StartCluster
	I0804 08:55:28.521666 1653676 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.521736 1653676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.522230 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.522435 1653676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 08:55:28.522512 1653676 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 08:55:28.522651 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:28.522656 1653676 addons.go:69] Setting storage-provisioner=true in profile "functional-699837"
	I0804 08:55:28.522728 1653676 addons.go:238] Setting addon storage-provisioner=true in "functional-699837"
	I0804 08:55:28.522681 1653676 addons.go:69] Setting default-storageclass=true in profile "functional-699837"
	I0804 08:55:28.522800 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.522810 1653676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-699837"
	I0804 08:55:28.523050 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.523236 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.524415 1653676 out.go:177] * Verifying Kubernetes components...
	I0804 08:55:28.525459 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.542729 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.542941 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.543225 1653676 addons.go:238] Setting addon default-storageclass=true in "functional-699837"
	I0804 08:55:28.543255 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.543552 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.543853 1653676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:28.545053 1653676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.545072 1653676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 08:55:28.545126 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.560950 1653676 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.560976 1653676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 08:55:28.561028 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.561396 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.582841 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.617980 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.628515 1653676 node_ready.go:35] waiting up to 6m0s for node "functional-699837" to be "Ready" ...
	I0804 08:55:28.628655 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:28.628715 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:28.628984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
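From here node_ready.go polls GET /api/v1/nodes/functional-699837 roughly every 500 ms for up to 6 minutes, tolerating connection-refused while the apiserver restarts. A sketch of such a wait loop with client-go (the clientset is built as in the previous sketch; interval and timeout taken from the log):

    package nodewait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady blocks until the named node reports Ready or the timeout hits.
    func WaitNodeReady(cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // e.g. connection refused: keep polling, as the log does
    			}
    			for _, c := range n.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }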
	I0804 08:55:28.669259 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.681042 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.723292 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.723334 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.723359 1653676 retry.go:31] will retry after 184.647945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732373 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.732422 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732443 1653676 retry.go:31] will retry after 304.201438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
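Both addon applies now enter the retry loop visible above: each failed `kubectl apply` is re-run after a short, jittered, growing delay (184 ms, 304 ms, …) until the apiserver answers on :8441. A generic sketch of that pattern (durations illustrative, not minikube's exact backoff):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs f up to attempts times, sleeping a linearly growing, jittered
    // delay between failures, and returns the last error if all attempts fail.
    func retry(attempts int, base time.Duration, f func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	// Stand-in for the failing apply: always refused, like the log until
    	// the apiserver is listening again.
    	_ = retry(3, 200*time.Millisecond, func() error {
    		return errors.New("dial tcp [::1]:8441: connect: connection refused")
    	})
    }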
	I0804 08:55:28.908717 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.958881 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.958925 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.958945 1653676 retry.go:31] will retry after 476.117899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.037179 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.088413 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.088468 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.088491 1653676 retry.go:31] will retry after 197.264107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.129716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.130032 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:29.286304 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.334473 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.337029 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.337065 1653676 retry.go:31] will retry after 823.238005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.435237 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:29.482679 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.485403 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.485436 1653676 retry.go:31] will retry after 800.644745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.629726 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.629799 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.630104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.128837 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.128917 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.129285 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.161434 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.213167 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.213231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.213275 1653676 retry.go:31] will retry after 656.353253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.286342 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.334470 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.336981 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.337012 1653676 retry.go:31] will retry after 508.253019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.629489 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.629586 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.629950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:30.630017 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:30.845486 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.869953 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.897779 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.897836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.897862 1653676 retry.go:31] will retry after 1.094600532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922225 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.922291 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922314 1653676 retry.go:31] will retry after 805.303636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.129681 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.628691 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.628775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.728325 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:31.779677 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:31.779728 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.779748 1653676 retry.go:31] will retry after 2.236258385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.993064 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:32.044458 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:32.044511 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.044552 1653676 retry.go:31] will retry after 1.503507165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.129706 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.129775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:32.629732 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.629813 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.630171 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:32.630256 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
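
The repeated GETs of /api/v1/nodes/functional-699837 are the node readiness wait: each "connection refused" is logged and swallowed, and the poll repeats on a roughly 500ms cadence until the node's Ready condition turns True. Below is a client-go sketch of the same check, assuming the /var/lib/minikube/kubeconfig path from the commands above; it is my own loop, not minikube's node_ready.go.

    // Sketch of polling a node's Ready condition with client-go, on the
    // ~500ms cadence visible in the round_trippers lines above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            } else {
                // Matches the log: report the error and keep retrying.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q never became Ready", name)
    }

    func main() {
        // The kubeconfig path is the one exported by the commands above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitNodeReady(cs, "functional-699837", 5*time.Minute))
    }
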
	I0804 08:55:33.128768 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.129210 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:33.548844 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:33.599998 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:33.600058 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.600081 1653676 retry.go:31] will retry after 1.994543648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.629251 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.629339 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.629634 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.017206 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:34.068508 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:34.068573 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.068597 1653676 retry.go:31] will retry after 3.823609715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.128678 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.129067 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.629688 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.629764 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.630098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.129721 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.130115 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:35.130189 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:35.595749 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:35.629120 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.629209 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.629582 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.645323 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:35.647845 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:35.647880 1653676 retry.go:31] will retry after 3.559085278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:36.129701 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.129780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.130117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:36.628869 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.628953 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.629336 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.129085 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.129171 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.129515 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.629335 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.629411 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.629704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:37.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:37.893118 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:37.941760 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:37.944423 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:37.944452 1653676 retry.go:31] will retry after 4.996473933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:38.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.128878 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.129260 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:38.628699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.628786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.629112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.128699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.128786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.129139 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.207320 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:39.257569 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:39.257615 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.257640 1653676 retry.go:31] will retry after 8.124151658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
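
Every apply failure in this stretch has the same root cause: kubectl validates manifests against the cluster's OpenAPI schema, so while nothing is listening on port 8441 even a perfectly valid YAML fails at the validation step. The error text itself names the escape hatch, --validate=false, which skips the schema download. Below is a hedged sketch of invoking the same binary that way from Go; this is the workaround the message suggests, not what minikube does here.

    // Hedged sketch: run the exact kubectl from the log with --validate=false,
    // the flag the error message itself recommends. Illustrative only; the
    // real ssh_runner commands run this under sudo inside the node.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
        )
        // Same kubeconfig the commands above export.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }

Skipping validation only moves the failure, of course: the apply still needs a reachable apiserver to persist the objects, which is why retrying with validation on is the safer default.
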
	I0804 08:55:39.629122 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.629208 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:40.129218 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.129325 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.129628 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:40.129693 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:40.629297 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.629368 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.629673 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.129406 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.129495 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.129887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.629498 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.629579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.629928 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.129645 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.130002 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:42.130063 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:42.629629 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.629709 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.630062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.941490 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:42.990741 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:42.993232 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:42.993279 1653676 retry.go:31] will retry after 4.825851231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:43.129602 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.129690 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.130065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:43.628834 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.628909 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.629270 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.129025 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.129120 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.629737 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:44.629803 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:45.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.129961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:45.628704 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.628789 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.629130 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.128858 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.128936 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.129295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.629013 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.629096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.629444 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.129179 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.129266 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.129609 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:47.129674 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:47.381978 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:47.430195 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.433093 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.433123 1653676 retry.go:31] will retry after 10.012002454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.629500 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.629573 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.629910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.820313 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:47.870430 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.870476 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.870493 1653676 retry.go:31] will retry after 10.075489679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:48.128804 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.128895 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.129267 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:48.629030 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.629141 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.629503 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:49.129320 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:49.129409 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:49.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:49.129864 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:49.629600 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:49.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:49.629992 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:50.128745 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:50.128835 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:50.129191 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:50.628937 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:50.629015 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:50.629395 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:51.128731 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:51.128818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:51.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:51.628936 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:51.629009 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:51.629384 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:51.629473 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:52.129137 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:52.129221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:52.129575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:52.629361 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:52.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:52.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:53.129540 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:53.129620 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:53.129949 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:53.628671 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:53.628747 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:53.629071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:54.128801 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:54.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:54.129261 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:54.129334 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:54.629005 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:54.629105 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:54.629481 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:55.129371 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:55.129447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:55.129804 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:55.629597 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:55.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:55.630007 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:56.128707 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:56.128802 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:57.445382 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:57.946208 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:06.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:06.129644 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
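
The failure mode changes at this point: the TCP connect now succeeds (the restarting apiserver has its socket open) but the TLS handshake stalls until the client's 10-second handshake timeout fires, which is why the response is logged at milliseconds=10000 instead of failing instantly with "connection refused". In Go's net/http that cutoff is Transport.TLSHandshakeTimeout, whose default is in fact 10 seconds. A minimal illustration against the endpoint from the log:

    // Sketch of the 10s TLS handshake cutoff seen as milliseconds=10000.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Matches the 10s cutoff in the log; also net/http's default.
                TLSHandshakeTimeout: 10 * time.Second,
                // Only because this probe sketch has no CA bundle for
                // minikube's self-signed certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-699837")
        if err != nil {
            fmt.Println("request failed:", err) // e.g. net/http: TLS handshake timeout
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }
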
	I0804 08:56:06.129694 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:06.129736 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.130254 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:16.130338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:56:16.130408 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:16.130480 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.262782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=132
	I0804 08:56:17.263910 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:56:17.264149 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264472 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:17.264610 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.264716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264973 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
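
A Retry-After response is the apiserver's standard way of shedding load while it starts up: it answers with a 429 or 5xx plus a Retry-After header, and a well-behaved client sleeps exactly that long before re-sending, which is what with_retry.go reports doing above. Here is a hand-rolled sketch of that client-side handling; client-go's real implementation lives in k8s.io/client-go/rest, and against minikube's self-signed certificate a plain http.Get will also fail TLS verification, so treat the URL as illustrative.

    // Hand-rolled sketch of honouring a Retry-After header, as the
    // with_retry.go line above describes. Not client-go's implementation.
    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetryAfter re-issues a GET whenever the server sets Retry-After,
    // sleeping the advertised number of seconds between attempts.
    func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
        for attempt := 1; ; attempt++ {
            resp, err := http.Get(url)
            if err != nil {
                return nil, err
            }
            ra := resp.Header.Get("Retry-After")
            if ra == "" || attempt >= maxAttempts {
                return resp, nil
            }
            secs, err := strconv.Atoi(ra)
            if err != nil {
                return resp, nil // ignore the HTTP-date form in this sketch
            }
            resp.Body.Close()
            fmt.Printf("got Retry-After=%ds on attempt %d, retrying\n", secs, attempt)
            time.Sleep(time.Duration(secs) * time.Second)
        }
    }

    func main() {
        resp, err := getWithRetryAfter("https://192.168.49.2:8441/healthz", 3)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }
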
	I0804 08:56:17.267370 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267420 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (19.822003727s)
	W0804 08:56:17.267450 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267470 1653676 retry.go:31] will retry after 18.146841122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267784 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267815 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (19.321577292s)
	W0804 08:56:17.267836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267852 1653676 retry.go:31] will retry after 19.077492147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
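
Two distinct socket errors interleave in these retries: "connection refused" means nothing is listening on the port, while "connection reset by peer" means the apiserver accepted the connection and then dropped it mid-request as it went down. In Go both arrive as wrapped syscall errors, so they can be told apart with errors.Is, as in this small sketch:

    // Sketch: distinguishing the two failure modes from the log.
    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
    )

    // classify separates ECONNREFUSED (no listener on the port) from
    // ECONNRESET (the peer dropped an established connection).
    func classify(err error) string {
        switch {
        case errors.Is(err, syscall.ECONNREFUSED):
            return "connection refused: nothing listening yet"
        case errors.Is(err, syscall.ECONNRESET):
            return "connection reset: peer accepted, then dropped us"
        default:
            return fmt.Sprintf("other: %v", err)
        }
    }

    func main() {
        // Dialing a closed local port demonstrates the ECONNREFUSED case.
        _, err := net.Dial("tcp", "127.0.0.1:8441")
        if err != nil {
            fmt.Println(classify(err))
        }
    }
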
	I0804 08:56:17.629331 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.629410 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.629777 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:18.129400 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:18.129489 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:18.129796 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:18.629536 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:18.629618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:18.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:18.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:19.129659 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:19.129746 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:19.130112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:19.628758 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:19.628835 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:19.629178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:20.128732 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:20.128806 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:20.129156 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:20.628674 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:20.628755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:20.629081 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:21.128792 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:21.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:21.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:21.129324 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:21.629020 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:21.629101 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:21.629489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:22.129299 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:22.129389 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:22.129751 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:22.629584 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:22.629664 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:22.629996 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:23.128722 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:23.128828 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:23.129192 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:23.628966 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:23.629055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:23.629374 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:23.629437 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:24.129128 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:24.129225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:24.129600 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms through 08:56:35.130, every attempt refused; node_ready.go:55 logs the will-retry warning at 08:56:26, 08:56:28, 08:56:31 and 08:56:33 ...]
	I0804 08:56:35.414447 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:35.463330 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:35.466231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.466267 1653676 retry.go:31] will retry after 13.873476046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
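The 13.87s pause is minikube's backoff between addon apply attempts (retry.go:31); later waits in this log grow to 20.76s and 22.53s. A rough sketch of that shape in Go, assuming a multiplicative base with random jitter (the constants here are illustrative, not minikube's actual tuning):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func main() {
    	wait := 10 * time.Second // illustrative base, not minikube's constant
    	for attempt := 1; attempt <= 3; attempt++ {
    		// Jitter spreads retries out so repeated applies don't synchronize.
    		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
    		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, jittered)
    		// The real loop sleeps for the jittered duration, then re-runs kubectl apply.
    		wait = wait * 3 / 2
    	}
    }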
	[... polling continues unchanged through 08:56:36.130 (will-retry warning at 08:56:35) ...]
	I0804 08:56:36.346375 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:36.394439 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:36.396962 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.396996 1653676 retry.go:31] will retry after 20.764306788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues unchanged through 08:56:49.129 (will-retry warnings at 08:56:38, 08:56:40, 08:56:42, 08:56:45 and 08:56:47) ...]
	I0804 08:56:49.340493 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:49.391267 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:49.391322 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.391344 1653676 retry.go:31] will retry after 22.530122873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues unchanged through 08:56:57.129 (will-retry warnings at 08:56:49, 08:56:52, 08:56:54 and 08:56:57) ...]
	I0804 08:56:57.161690 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:57.212094 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212172 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212321 1653676 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... polling continues unchanged through 08:57:11.629 (will-retry warnings at 08:56:59, 08:57:01, 08:57:04, 08:57:06, 08:57:08 and 08:57:11) ...]
	I0804 08:57:11.922305 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:57:11.970691 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973096 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973263 1653676 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 08:57:11.975142 1653676 out.go:177] * Enabled addons: 
	I0804 08:57:11.976503 1653676 addons.go:514] duration metric: took 1m43.454009966s for enable addons: enabled=[]
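Note that the addon phase ends with an empty enabled list: both storage-provisioner and default-storageclass failed because kubectl could not download the OpenAPI schema from the unreachable apiserver. The --validate=false escape hatch named in the error text would only skip schema validation; it would not get the manifests applied while port 8441 still refuses connections.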
	[... polling continues unchanged through 08:57:16.629 (will-retry warnings at 08:57:13 and 08:57:15) ...]
	I0804 08:57:17.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:17.128893 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:17.129257 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:17.628792 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:17.629202 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:17.629293 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:18.128759 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:18.128847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:18.129200 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:18.629041 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:18.629121 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:18.629468 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:19.129039 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:19.129112 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:19.129489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:19.629035 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:19.629105 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:19.629466 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:19.629532 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:20.129056 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:20.129136 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:20.129527 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:20.629075 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:20.629154 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:20.629482 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:21.129294 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:21.129367 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:21.129717 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:21.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:21.629463 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:21.629764 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:21.629831 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:22.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:22.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:22.129781 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:22.629426 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:22.629501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:22.629789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:23.129450 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:23.129535 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:23.129870 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:23.629332 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:23.629418 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:23.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:24.128868 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:24.128960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:24.129333 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:24.129416 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:24.628863 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:24.628939 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:24.629295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:25.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:25.128887 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:25.129269 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:25.629006 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:25.629081 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:25.629396 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:26.129192 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:26.129303 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:26.129672 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:26.129741 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:26.629536 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:26.629611 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:26.629914 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:27.129705 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:27.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:27.130156 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:27.628879 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:27.628961 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:27.629280 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:28.129023 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:28.129114 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:28.129510 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:28.629296 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:28.629387 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:28.629697 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:28.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:29.129519 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:29.129613 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:29.129968 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:29.628696 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:29.628770 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:29.629059 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:30.128786 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:30.128880 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:30.129235 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:30.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:30.629054 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:30.629304 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:31.129276 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:31.129363 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:31.129719 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:31.129793 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:31.629528 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:31.629615 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:31.629920 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:32.128690 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:32.128765 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:32.129098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:32.628838 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:32.628956 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:32.629288 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:33.129003 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:33.129091 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:33.129461 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:33.629193 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:33.629295 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:33.629610 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:33.629682 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:34.129449 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:34.129539 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:34.129898 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:34.629687 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:34.629766 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:34.630068 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:35.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:35.128868 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:35.129222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:35.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:35.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:35.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:36.129189 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:36.129297 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:36.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:36.129763 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:36.629508 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:36.629584 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:36.629873 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:37.129696 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:37.129776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:37.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:37.628857 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:37.628938 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:37.629221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:38.128990 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:38.129078 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:38.129487 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:38.629184 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:38.629289 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:38.629594 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:38.629667 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:39.129364 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:39.129441 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:39.129810 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:39.629603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:39.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:39.629968 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:40.128718 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:40.128797 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:40.129178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:40.628945 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:40.629021 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:40.629364 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:41.129136 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:41.129253 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:41.129612 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:41.129682 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:41.629452 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:41.629530 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:41.629831 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:42.129618 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:42.129707 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:42.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:42.628760 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:42.628838 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:42.629155 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:43.128868 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:43.128970 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:43.129365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:43.629090 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:43.629163 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:43.629503 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:43.629565 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:44.129335 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:44.129433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:44.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:44.629577 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:44.629649 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:44.629949 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:45.128664 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:45.128759 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:45.129131 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:45.628854 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:45.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:45.629229 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:46.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:46.129047 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:46.129442 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:46.129517 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:46.629268 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:46.629344 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:46.629668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:47.129457 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:47.129529 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:47.129867 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:47.629659 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:47.629734 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:47.630045 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:48.128764 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:48.128839 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:48.129183 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:48.628996 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:48.629085 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:48.629417 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:48.629493 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:49.129179 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:49.129288 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:49.129668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:49.629441 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:49.629513 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:49.629806 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:50.129603 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:50.129678 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:50.130019 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:50.628730 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:50.628803 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:50.629119 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:51.128835 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:51.128916 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:51.129293 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:51.129364 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:51.629058 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:51.629136 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:51.629474 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:52.129201 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:52.129298 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:52.129723 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:52.629568 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:52.629654 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:52.630018 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:53.128764 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:53.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:53.129204 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:53.628946 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:53.629019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:53.629368 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:53.629442 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:54.129146 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:54.129225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:54.129608 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:54.629341 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:54.629417 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:54.629719 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:55.129545 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:55.129619 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:55.129967 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:55.628701 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:55.628776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:55.629095 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:56.128809 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:56.128887 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:56.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:56.129347 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:56.629019 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:56.629096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:56.629435 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:57.129166 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:57.129283 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:57.129655 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:57.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:57.629534 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:57.629859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:58.129657 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:58.129755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:58.130109 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:58.130182 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:58.628778 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:58.628892 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:58.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:59.128942 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:59.129046 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:59.129427 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:59.629154 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:59.629257 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:59.629579 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:00.129357 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:00.129459 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:00.129797 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:00.629587 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:00.629677 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:00.630022 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:00.630087 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:01.128755 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:01.128831 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:01.129179 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:01.628959 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:01.629054 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:01.629420 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:02.129182 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:02.129295 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:02.129668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:02.629476 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:02.629572 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:02.629862 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:03.129679 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:03.129759 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:03.130099 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:03.130172 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:03.628846 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:03.628948 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:03.629308 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:04.129055 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:04.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:04.129501 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:04.629285 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:04.629371 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:04.629678 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:05.129485 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:05.129556 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:05.129895 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:05.629689 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:05.629775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:05.630092 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:05.630166 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:06.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:06.128884 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:06.129262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:06.628981 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:06.629094 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:06.629442 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:07.129153 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:07.129236 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:07.129612 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:07.629373 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:07.629460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:07.629767 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:08.129560 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:08.129642 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:08.129999 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:08.130067 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:08.628667 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:08.628761 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:08.629105 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:09.128826 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:09.128902 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:09.129208 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:09.628951 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:09.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:09.629355 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:10.129067 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:10.129144 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:10.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:10.629346 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:10.629440 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:10.629755 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:10.629825 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:11.129536 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:11.129607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:11.129931 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:11.628656 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:11.628740 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:11.629041 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:12.128773 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:12.128847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:12.129188 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:12.628944 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:12.629039 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:12.629370 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:13.129112 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:13.129185 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:13.129528 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:13.129601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:13.628854 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:13.628929 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:13.629262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:14.129022 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:14.129107 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:14.129456 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:14.629179 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:14.629262 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:14.629560 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:15.129358 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:15.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:15.129768 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:15.129842 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:15.629588 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:15.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:15.629993 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:16.128722 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:16.128807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:16.129155 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:16.628888 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:16.628968 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:16.629289 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:17.128871 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:17.128958 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:17.129331 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:17.629089 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:17.629163 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:17.629498 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:17.629579 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:18.129331 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:18.129413 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:18.129748 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:18.629352 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:18.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:18.629731 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:19.129531 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:19.129601 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:19.129926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:19.629715 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:19.629793 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:19.630096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:19.630165 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:20.128817 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:20.128892 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:20.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:20.628986 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:20.629062 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:20.629379 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:21.129140 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:21.129256 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:21.129611 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:21.629346 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:21.629422 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:21.629705 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:22.129503 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:22.129592 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:22.129936 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:22.130013 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:22.628702 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:22.628771 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:22.629065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:23.128773 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:23.128856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:23.129193 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:23.628915 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:23.629017 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:23.629329 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:24.129041 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:24.129130 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:24.129485 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:24.629265 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:24.629368 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:24.629656 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:24.629721 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:25.129446 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:25.129542 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:25.129838 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:25.629614 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:25.629692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:25.630005 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:26.128734 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:26.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:26.129143 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:26.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:26.628945 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:26.629295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:27.129001 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:27.129078 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:27.129430 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:27.129497 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:27.629154 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:27.629226 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:27.629562 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:28.129344 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:28.129447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:28.129769 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:28.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:28.629542 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:28.629856 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:29.129664 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:29.129750 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:29.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:29.130200 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:29.628750 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:29.628825 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:29.629116 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:30.128860 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:30.128943 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:30.129300 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:30.629025 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:30.629107 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:30.629409 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:31.129309 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:31.129383 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:31.129732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:31.629506 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:31.629578 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:31.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:31.629930 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:32.129669 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:32.129745 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:32.130096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:32.628810 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:32.628890 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:32.629161 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:33.128895 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:33.128972 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:33.129352 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:33.629078 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:33.629161 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:33.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:34.129351 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:34.129430 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:34.129807 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:34.129887 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:34.629642 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:34.629714 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:34.630028 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:35.128785 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:35.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:35.129207 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:35.628963 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:35.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:35.629350 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:36.129133 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:36.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:36.129495 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:36.629057 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:36.629152 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:36.629476 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:36.629541 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:37.129344 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:37.129435 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:37.129779 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:37.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:37.629665 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:37.629987 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:38.128723 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:38.128818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:38.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:38.628949 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:38.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:38.629367 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:39.129078 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:39.129177 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:39.129555 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:39.129622 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:39.629381 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:39.629467 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:39.629800 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:40.129606 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:40.129705 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:40.130062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:40.628786 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:40.628889 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:40.629233 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:41.129024 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:41.129100 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:41.129462 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:41.629280 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:41.629379 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:41.629701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:41.629762 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:42.129521 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:42.129597 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:42.129950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:42.628667 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:42.628756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:42.629073 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:43.128819 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:43.128897 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:43.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:43.629033 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:43.629148 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:43.629489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:44.129324 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:44.129407 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:44.129750 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:44.129816 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:44.629574 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:44.629658 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:44.629972 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:45.128703 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:45.128778 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:45.129125 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:45.628842 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:45.628933 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:45.629252 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:46.128948 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:46.129033 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:46.129380 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:46.629108 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:46.629185 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:46.629520 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:46.629580 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:47.129340 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:47.129419 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:47.129767 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:47.629563 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:47.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:47.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:48.128670 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:48.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:48.129104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:48.629702 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:48.629776 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:48.630085 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:48.630146 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:49.128823 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:49.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:49.129229 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:49.628981 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:49.629065 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:49.629392 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:50.129122 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:50.129198 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:50.129554 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:50.629352 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:50.629447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:50.629788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:51.129551 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:51.129636 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:51.129966 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:51.130030 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:51.628723 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:51.628822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:51.629134 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:52.128861 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:52.128966 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:52.129334 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:52.629047 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:52.629124 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:52.629436 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:53.129166 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:53.129271 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:53.129578 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:53.629347 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:53.629425 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:53.629721 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:53.629789 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:54.129531 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:54.129608 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:54.130022 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:54.628732 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:54.628807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:54.629107 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:55.128818 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:55.128901 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:55.129281 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:55.629003 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:55.629084 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:55.629411 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:56.129310 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:56.129399 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:56.129752 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:56.129817 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:56.629559 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:56.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:56.629927 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:57.129729 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:57.129818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:57.130192 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:57.628939 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:57.629019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:57.629349 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:58.129065 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:58.129186 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:58.129616 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:58.629318 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:58.629398 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:58.629699 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:58.629757 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:59.129513 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:59.129603 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:59.129965 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:59.628703 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:59.628781 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:59.629083 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:00.128805 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:00.128896 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:00.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:00.629019 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:00.629098 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:00.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:01.129270 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:01.129348 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:01.129717 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:01.129794 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:01.629537 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:01.629608 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:01.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:02.128689 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:02.128769 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:02.129142 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:02.628902 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:02.628987 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:02.629315 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:03.129038 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:03.129117 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:03.129496 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:03.629371 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:03.629457 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:03.629773 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:03.629837 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:04.129591 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:04.129684 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:14.133399 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10003
	W0804 08:59:14.133474 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:59:14.133535 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:14.133571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.134577 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:59:24.134670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
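
The failure mode changes in the two requests above: instead of an immediate "connection refused", each GET hangs for roughly ten seconds (milliseconds=10003, then 10000) before failing with "net/http: TLS handshake timeout". That pattern suggests the apiserver port is now accepting TCP connections but not completing the TLS handshake, and the ~10 s figure matches the Transport-level handshake timeout in Go's net/http. A hedged sketch of where that knob lives follows; the URL and the 10 s value are assumptions mirroring this log, not taken from minikube's source.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// If the server accepts the TCP connection but never finishes
			// the TLS handshake, the request fails after this long with
			// "net/http: TLS handshake timeout" -- the error seen above.
			TLSHandshakeTimeout: 10 * time.Second,
		},
	}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-699837")
	fmt.Println(err)
}
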
	I0804 08:59:24.134743 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:24.134791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.447100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=312
	I0804 08:59:25.448003 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:59:25.448109 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448371 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:25.448473 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.448503 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:25.629198 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.629320 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.629693 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:26.129362 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:26.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:26.129786 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:26.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:26.629634 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:26.629913 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:26.629981 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:27.129710 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:27.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:27.130145 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:27.628843 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:27.628915 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:27.629211 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:28.128958 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:28.129049 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:28.129414 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:28.629057 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:28.629131 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:28.629437 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:29.129142 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:29.129215 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:29.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:29.129634 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:29.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:29.629434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:29.629732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:30.129550 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:30.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:30.129981 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:30.628711 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:30.628785 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:30.629088 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:31.128761 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:31.128837 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:31.129194 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:31.628935 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:31.629013 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:31.629357 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:31.629423 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:32.129102 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:32.129207 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:32.129598 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:32.629343 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:32.629412 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:32.629682 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:33.129483 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:33.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:33.129937 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:33.628685 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:33.628761 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:33.629071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:34.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:34.128880 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:34.129196 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:34.129292 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:34.628955 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:34.629026 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:34.629332 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:35.129092 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:35.129172 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:35.129540 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:35.629393 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:35.629466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:35.629788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:36.129551 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:36.129629 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:36.129981 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:36.130049 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:36.628714 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:36.628796 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:36.629109 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:37.128919 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:37.128993 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:37.129345 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:37.629059 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:37.629147 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:37.629463 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:38.129234 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:38.129326 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:38.129664 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:38.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:38.629432 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:38.629732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:38.629805 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:39.129576 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:39.129650 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:39.129997 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:39.628740 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:39.628825 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:39.629123 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.128863 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.128946 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.629061 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.629132 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:41.129329 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.129415 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.129770 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:41.129836 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:41.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.629926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.129712 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.129803 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.130147 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.629230 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.129055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.129407 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.629110 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.629193 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.629549 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:43.629613 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:44.129360 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.129442 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.129809 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:44.629604 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.629695 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.629982 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.128765 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.628969 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.629365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:46.129219 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.129334 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.129701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:46.129778 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:46.629522 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.629594 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.629887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.129668 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.129774 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.130135 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.628848 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.628924 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.629222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.128974 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.129074 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.129460 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.629189 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.629275 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.629575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:48.629637 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:49.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.129460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.129826 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:49.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.128684 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.128784 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.129153 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.628866 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.628940 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.629236 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:51.128964 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.129053 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.129443 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:51.129520 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:51.629181 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.629285 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.129363 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.129782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.629637 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.629921 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.128676 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.128760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.129117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:53.629319 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:54.129011 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.129119 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.129458 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:54.629169 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.629255 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.629563 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.129370 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.129456 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.129803 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.629586 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.629656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:55.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:56.129716 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.129807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.130158 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:56.628872 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.628960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.629280 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.129030 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.129533 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.629322 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.629394 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.629681 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:58.129475 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.129969 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:58.130041 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:58.629691 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.629768 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.630065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.128877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.129109 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.129205 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.129657 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.629456 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.629529 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:00.629939 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:01.129658 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.129735 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.130048 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:01.628777 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.628856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.629190 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:02.128935 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:02.129010 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:02.129319 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:02.628797 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:02.628877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:02.629137 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:03.128821 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:03.128896 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:03.129167 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:03.129224 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:03.628891 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:03.628974 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:03.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:04.129012 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:04.129096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:04.129462 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:04.629177 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:04.629276 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:04.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:05.129034 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:05.129129 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:05.129588 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:05.129664 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:05.629416 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:05.629491 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:05.629807 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:06.129708 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:06.129798 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:06.130177 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:06.628914 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:06.628986 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:06.629309 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:07.129052 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:07.129152 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:07.129545 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:07.629359 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:07.629447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:07.629774 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:07.629843 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:08.129619 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:08.129703 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:08.130076 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:08.628794 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:08.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:08.629209 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:09.128966 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:09.129044 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:09.129548 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:09.629398 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:09.629478 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:09.629790 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:10.129602 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:10.129686 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:10.130062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:10.130134 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:10.628810 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:10.628888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:10.629214 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:11.128747 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:11.128824 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:11.129152 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:11.628878 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:11.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:11.629286 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:12.129028 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:12.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:12.129473 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:12.629262 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:12.629338 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:12.629618 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:12.629689 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:13.129417 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:13.129501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:13.129842 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:13.629621 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:13.629693 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:13.629988 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:14.128745 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:14.128832 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:14.129178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:14.628945 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:14.629017 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:14.629397 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:15.129144 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:15.129234 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:15.129617 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:15.129699 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:15.629451 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:15.629537 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:15.629859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:16.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:16.129725 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:16.130080 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:16.628842 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:16.628922 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:16.629262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:17.128979 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:17.129061 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:17.129404 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:17.629119 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:17.629192 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:17.629516 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:17.629592 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:18.129336 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:18.129414 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:18.129755 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:18.629486 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:18.629564 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:18.629881 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:19.129669 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:19.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:19.130101 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:19.628816 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:19.628890 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:19.629175 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:20.128910 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:20.128984 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:20.129330 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:20.129401 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:20.629078 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:20.629168 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:20.629501 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:21.129330 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:21.129424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:21.129762 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:21.629541 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:21.629617 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:21.629961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:22.128702 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:22.128777 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:22.129131 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:22.628835 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:22.628922 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:22.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:22.629330 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 request/response cycle repeats every ~500ms from 09:00:23.128 through 09:01:10.629 (~95 attempts), each returning "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready.go:55 "will retry" warnings logged every few attempts ...]
	I0804 09:01:11.129612 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.129692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.129995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:11.628703 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.628780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.629047 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.128784 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.129223 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.628955 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.629416 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:13.129129 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.129596 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:13.129670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:13.629350 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.629433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.129533 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.129618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.129952 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.628687 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.628782 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.629096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.128811 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.128888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.628958 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.629372 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:15.629444 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
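
	[editor's note] Every attempt in this window fails at TCP connect ("connection refused"), meaning nothing is listening on 192.168.49.2:8441 at all: the apiserver is down rather than merely slow. A quick independent probe of that endpoint could look like the following (illustrative sketch only; the address is taken from the log and the probe would have to run on a host that can reach 192.168.49.2):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Plain TCP dial against the apiserver endpoint seen in the log.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // matches "connection refused" above
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}
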
	I0804 09:01:16.129169 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.129269 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:16.629474 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.629546 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.629863 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.129733 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.130077 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.628801 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.629169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:18.128883 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.128963 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:18.129398 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:18.629048 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.629135 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.629454 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.129179 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.129268 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.129621 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.629351 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.629424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.629708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:20.129508 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.129585 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.129925 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:20.129994 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:20.628667 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.628737 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.629038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.128739 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.628882 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.128994 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.129070 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.129426 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.629135 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.629221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.629538 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:22.629601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:23.129384 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.129466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.129808 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:23.629595 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.629669 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.629984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.128733 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.128814 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.629511 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.630004 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:24.630069 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:25.128773 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.128859 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:25.629077 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.629159 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.629492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.129299 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.129377 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.129704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.629492 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.629562 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:27.129668 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.129753 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:27.130203 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:27.628888 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.628961 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.129030 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:28.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:28.129492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.629210 1653676 node_ready.go:38] duration metric: took 6m0.000644351s for node "functional-699837" to be "Ready" ...
	I0804 09:01:28.630996 1653676 out.go:201] 
	W0804 09:01:28.631963 1653676 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 09:01:28.631975 1653676 out.go:270] * 
	W0804 09:01:28.633557 1653676 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:01:28.634655 1653676 out.go:201] 
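The wait that expires above is minikube polling the node's Ready condition until its 6m deadline; the apiserver on 192.168.49.2:8441 never starts listening, so every poll dies at TCP connect. A rough out-of-band equivalent of that check (a sketch only, reusing the kubeconfig path and node name that appear in this log, not minikube's internal code path) is kubectl's built-in condition wait:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  wait node/functional-699837 --for=condition=Ready --timeout=6m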
	
	
	==> Docker <==
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 systemd[1]: cri-docker.service: Deactivated successfully.
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start docker client with request timeout 0s"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Loaded network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 08:55:25 functional-699837 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a670d9d90ef4b3f9c8a2229b07375783d2742e14cb8b08de1d1d609352b31ca9/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6196286ba923f262b934ea01e1a6c54ba05e38908d2ce0251696c08a8b6e4e4f/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87c98d51b11aa2b27ab051d1a1e76c991403967dc4bbed5c8865a1c8839a006c/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dc39892c792c69f93a9689deb4a22058aa932aaab9b5a2ef60fe93066740a6a/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:56:16 functional-699837 dockerd[7186]: time="2025-08-04T08:56:16.274092329Z" level=info msg="ignoring event" container=6a82f093dfdcc77dca8bafe4751718938b424c4cd13715b8c25f8c91d4094c87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:25 functional-699837 dockerd[7186]: time="2025-08-04T08:56:25.952124711Z" level=info msg="ignoring event" container=d11d953e110f7fac9239023c8f301d3ea182fcc19934837d8f119e7d945ae14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:55 functional-699837 dockerd[7186]: time="2025-08-04T08:56:55.721506604Z" level=info msg="ignoring event" container=340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:24 functional-699837 dockerd[7186]: time="2025-08-04T08:59:24.457189004Z" level=info msg="ignoring event" container=a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:32 functional-699837 dockerd[7186]: time="2025-08-04T08:59:32.204638673Z" level=info msg="ignoring event" container=2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fafac7520c8d       9ad783615e1bc       2 minutes ago       Exited              kube-controller-manager   6                   87c98d51b11aa       kube-controller-manager-functional-699837
	a70a68ec61693       d85eea91cc41d       2 minutes ago       Exited              kube-apiserver            6                   6196286ba923f       kube-apiserver-functional-699837
	340fbe431c80a       1e30c0b1e9b99       4 minutes ago       Exited              etcd                      6                   a670d9d90ef4b       etcd-functional-699837
	3206d43d6e58f       21d34a2aeacf5       6 minutes ago       Running             kube-scheduler            2                   4dc39892c792c       kube-scheduler-functional-699837
	0cb03d71b984f       21d34a2aeacf5       6 minutes ago       Exited              kube-scheduler            1                   cdae8372eae9d       kube-scheduler-functional-699837
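All three control-plane containers above have already exited on their 6th attempt, which is why only kube-scheduler is still running. Their exit codes and finish times can be read straight from Docker on the node, using the container IDs listed above (a diagnostic sketch, not part of the test run):

	docker inspect -f '{{.State.ExitCode}} {{.State.FinishedAt}}' \
	  340fbe431c80 a70a68ec6169 2fafac7520c8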
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:01:39.533414   10136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:39.533933   10136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:39.535555   10136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:39.535990   10136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:39.537569   10136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
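Both lookups fail at the TCP layer, meaning nothing is listening on 8441 at all rather than a server rejecting the request. A quick check from inside the node separates those two cases (assumes ss and curl are available in the minikube image):

	ss -ltn 'sport = :8441'
	curl -k --max-time 2 https://localhost:8441/healthz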
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [340fbe431c80] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
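This usage dump is the key failure of the run: etcd refuses to start because it is handed -proxy-refresh-interval, a v2-proxy flag that newer etcd releases no longer define, and everything above it (apiserver, controller-manager, node readiness) crash-loops as a consequence. Whether the flag comes from the static-pod manifest can be confirmed on the node; the manifest path is kubeadm's default and the image placeholder is an assumption of this sketch:

	grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml
	docker run --rm --entrypoint etcd <etcd-image-from-manifest> --version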
	
	
	
	==> kernel <==
	 09:01:39 up 1 day, 17:43,  0 users,  load average: 0.38, 0.14, 0.36
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [a70a68ec6169] <==
	W0804 08:59:04.426148       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.426280       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 08:59:04.427463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 08:59:04.434192       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 08:59:04.440592       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 08:59:04.440613       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 08:59:04.440846       1 instance.go:232] Using reconciler: lease
	W0804 08:59:04.441668       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.441684       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	[... the same etcd dial failures on Channels #1, #2 and #7 repeat with increasing back-off through 08:59:20, all "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" ...]
	F0804 08:59:24.442401       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
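The apiserver's fatal exit follows directly from the etcd failure above: every storage channel to 127.0.0.1:2379 is refused until the lease reconciler hits its deadline. Once etcd is up, its health endpoint can be probed directly; the certificate paths below follow minikube's usual layout and are an assumption of this sketch:

	curl --cacert /var/lib/minikube/certs/etcd/ca.crt \
	     --cert /var/lib/minikube/certs/apiserver-etcd-client.crt \
	     --key /var/lib/minikube/certs/apiserver-etcd-client.key \
	     https://127.0.0.1:2379/health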
	
	
	==> kube-controller-manager [2fafac7520c8] <==
	I0804 08:59:11.887703       1 serving.go:386] Generated self-signed cert in-memory
	I0804 08:59:12.166874       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 08:59:12.166898       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 08:59:12.168293       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 08:59:12.168315       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 08:59:12.168600       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 08:59:12.168727       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 08:59:32.171192       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [0cb03d71b984] <==
	
	
	==> kube-scheduler [3206d43d6e58] <==
	E0804 09:00:23.348524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:00:28.563885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:00:32.014424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:00:33.033677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:00:47.281529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:00:47.653383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:00:48.988484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:00:54.836226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:00:54.975251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:00:57.394600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:00:59.500812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:01:00.013055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:01:00.539902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:01:01.692270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:01:02.088398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:01:08.204402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:01:09.352314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:01:11.128294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:01:23.683836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:01:24.236788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:01:31.276535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:01:35.817387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:01:38.102719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:01:38.258043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:01:39.576625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	
	
	==> kubelet <==
	Aug 04 09:01:23 functional-699837 kubelet[4226]: E0804 09:01:23.481137    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.396607    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.466107    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.706024    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.936556    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:01:28 functional-699837 kubelet[4226]: E0804 09:01:28.598604    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:29 functional-699837 kubelet[4226]: E0804 09:01:29.657833    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:01:30 functional-699837 kubelet[4226]: I0804 09:01:30.482479    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:30 functional-699837 kubelet[4226]: E0804 09:01:30.482883    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.467464    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.599251    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: I0804 09:01:31.599334    4226 scope.go:117] "RemoveContainer" containerID="2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.599476    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:01:33 functional-699837 kubelet[4226]: E0804 09:01:33.392410    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-699837&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Aug 04 09:01:34 functional-699837 kubelet[4226]: E0804 09:01:34.397801    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:36 functional-699837 kubelet[4226]: E0804 09:01:36.599152    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:36 functional-699837 kubelet[4226]: I0804 09:01:36.599236    4226 scope.go:117] "RemoveContainer" containerID="340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b"
	Aug 04 09:01:36 functional-699837 kubelet[4226]: E0804 09:01:36.599395    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: I0804 09:01:37.484522    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: E0804 09:01:37.484947    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: E0804 09:01:37.599579    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: I0804 09:01:37.599670    4226 scope.go:117] "RemoveContainer" containerID="a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: E0804 09:01:37.599814    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(2b39e4280fdde7528fa65c33493b517b)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="2b39e4280fdde7528fa65c33493b517b"
	Aug 04 09:01:38 functional-699837 kubelet[4226]: E0804 09:01:38.468185    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:39 functional-699837 kubelet[4226]: E0804 09:01:39.658876    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
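By the end of the log the kubelet is in steady state: etcd, kube-apiserver and kube-controller-manager are all pinned at CrashLoopBackOff's 5m maximum back-off, so each restarts at most once every five minutes and node registration keeps failing. The restart history stays visible through Docker even with the apiserver down (cri-dockerd's k8s_ container-name prefix assumed):

	docker ps -a --filter 'name=k8s_kube-apiserver' --format '{{.Names}}\t{{.Status}}'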
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (259.405178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmd (1.80s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly (1.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-699837 get pods
functional_test.go:758: (dbg) Non-zero exit: out/kubectl --context functional-699837 get pods: exit status 1 (92.218568ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:761: failed to run kubectl directly. args "out/kubectl --context functional-699837 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
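[Editor's note] The inspect output above shows the container Running with 8441/tcp published to 127.0.0.1:32786, so the refusal comes from inside the container rather than from the port mapping. A sketch of resolving such a published port programmatically (hypothetical, not harness code; it reuses the same Go template the cli_runner invocations later in this log apply to "22/tcp"):

	// port_lookup.go: hypothetical sketch, assumes docker and the
	// functional-699837 container exist on this host.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-699837").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Prints 32786 given the NetworkSettings captured above.
		fmt.Println("host port:", strings.TrimSpace(string(out)))
	}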
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (261.764123ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-114794 image ls --format short --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format yaml --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh     │ functional-114794 ssh pgrep buildkitd                                                                                                               │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ image   │ functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr                                              │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format json --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format table --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls                                                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ delete  │ -p functional-114794                                                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ start   │ -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start   │ -p functional-699837 --alsologtostderr -v=8                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:55 UTC │                     │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.1                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.3                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:latest                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add minikube-local-cache-test:functional-699837                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache delete minikube-local-cache-test:functional-699837                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ list                                                                                                                                                │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl images                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	│ cache   │ functional-699837 cache reload                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                 │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ kubectl │ functional-699837 kubectl -- --context functional-699837 get pods                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:55:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 08:55:20.770600 1653676 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:55:20.770872 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.770883 1653676 out.go:358] Setting ErrFile to fd 2...
	I0804 08:55:20.770890 1653676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:55:20.771067 1653676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:55:20.771644 1653676 out.go:352] Setting JSON to false
	I0804 08:55:20.772653 1653676 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149810,"bootTime":1754147911,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:55:20.772739 1653676 start.go:140] virtualization: kvm guest
	I0804 08:55:20.774597 1653676 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:55:20.775675 1653676 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:55:20.775678 1653676 notify.go:220] Checking for updates...
	I0804 08:55:20.776705 1653676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:55:20.777818 1653676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:20.778845 1653676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:55:20.779811 1653676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:55:20.780885 1653676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:55:20.782127 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:20.782240 1653676 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:55:20.804704 1653676 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:55:20.804841 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.850605 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.841828701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.850698 1653676 docker.go:318] overlay module found
	I0804 08:55:20.852305 1653676 out.go:177] * Using the docker driver based on existing profile
	I0804 08:55:20.853166 1653676 start.go:304] selected driver: docker
	I0804 08:55:20.853179 1653676 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.853275 1653676 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:55:20.853364 1653676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:55:20.899900 1653676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 08:55:20.891412564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:55:20.900590 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:20.900687 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:20.900743 1653676 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:20.902216 1653676 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 08:55:20.903155 1653676 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:55:20.904009 1653676 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:55:20.904940 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:20.904978 1653676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:55:20.904991 1653676 cache.go:56] Caching tarball of preloaded images
	I0804 08:55:20.905036 1653676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:55:20.905069 1653676 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 08:55:20.905079 1653676 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 08:55:20.905203 1653676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 08:55:20.923511 1653676 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 08:55:20.923529 1653676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 08:55:20.923544 1653676 cache.go:230] Successfully downloaded all kic artifacts
	I0804 08:55:20.923577 1653676 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 08:55:20.923631 1653676 start.go:364] duration metric: took 36.633µs to acquireMachinesLock for "functional-699837"
	I0804 08:55:20.923647 1653676 start.go:96] Skipping create...Using existing machine configuration
	I0804 08:55:20.923652 1653676 fix.go:54] fixHost starting: 
	I0804 08:55:20.923842 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:20.940410 1653676 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 08:55:20.940440 1653676 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 08:55:20.942107 1653676 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 08:55:20.943161 1653676 machine.go:93] provisionDockerMachine start ...
	I0804 08:55:20.943249 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:20.959620 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:20.959871 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:20.959884 1653676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 08:55:21.080396 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.080433 1653676 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 08:55:21.080500 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.097426 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.097649 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.097666 1653676 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 08:55:21.227825 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 08:55:21.227926 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.246066 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.246278 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.246294 1653676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 08:55:21.373154 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 08:55:21.373185 1653676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 08:55:21.373228 1653676 ubuntu.go:177] setting up certificates
	I0804 08:55:21.373273 1653676 provision.go:84] configureAuth start
	I0804 08:55:21.373335 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:21.390471 1653676 provision.go:143] copyHostCerts
	I0804 08:55:21.390507 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390548 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 08:55:21.390558 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 08:55:21.390632 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 08:55:21.390734 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390760 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 08:55:21.390767 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 08:55:21.390803 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 08:55:21.390876 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390902 1653676 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 08:55:21.390914 1653676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 08:55:21.390947 1653676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 08:55:21.391030 1653676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
	I0804 08:55:21.573518 1653676 provision.go:177] copyRemoteCerts
	I0804 08:55:21.573582 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 08:55:21.573618 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.591269 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:21.681513 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 08:55:21.681585 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 08:55:21.702708 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 08:55:21.702758 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 08:55:21.723583 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 08:55:21.723630 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 08:55:21.744569 1653676 provision.go:87] duration metric: took 371.27679ms to configureAuth
	I0804 08:55:21.744602 1653676 ubuntu.go:193] setting minikube options for container-runtime
	I0804 08:55:21.744799 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:21.744861 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.762017 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.762244 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.762255 1653676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 08:55:21.889470 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 08:55:21.889494 1653676 ubuntu.go:71] root file system type: overlay
	I0804 08:55:21.889614 1653676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 08:55:21.889686 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:21.906485 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:21.906734 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:21.906827 1653676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 08:55:22.043972 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 08:55:22.044042 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.061528 1653676 main.go:141] libmachine: Using SSH client type: native
	I0804 08:55:22.061801 1653676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 08:55:22.061820 1653676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 08:55:22.189999 1653676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 08:55:22.190024 1653676 machine.go:96] duration metric: took 1.246850112s to provisionDockerMachine
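[Editor's note] The provisioning step above uses an update-only-if-changed pattern: the freshly rendered unit is diffed against the installed one, and docker is reloaded and restarted only when they differ, which keeps reprovisioning a running machine cheap. A rough Go equivalent of that shell one-liner (a sketch assuming the same file paths and root privileges; not code from minikube):

	// unit_update.go: hypothetical sketch of the diff/mv/systemctl pattern.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		newUnit, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			fmt.Println("no rendered unit:", err)
			return
		}
		oldUnit, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
		if bytes.Equal(oldUnit, newUnit) {
			fmt.Println("unit unchanged; skipping docker restart")
			return
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
			fmt.Println("install failed:", err)
			return
		}
		// Mirrors the && chain in the SSH command above.
		for _, args := range [][]string{
			{"-f", "daemon-reload"},
			{"-f", "enable", "docker"},
			{"-f", "restart", "docker"},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				fmt.Println("systemctl failed:", err)
				return
			}
		}
	}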
	I0804 08:55:22.190035 1653676 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 08:55:22.190046 1653676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 08:55:22.190105 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 08:55:22.190157 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.207121 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.297799 1653676 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 08:55:22.300559 1653676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0804 08:55:22.300580 1653676 command_runner.go:130] > NAME="Ubuntu"
	I0804 08:55:22.300588 1653676 command_runner.go:130] > VERSION_ID="22.04"
	I0804 08:55:22.300596 1653676 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0804 08:55:22.300602 1653676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0804 08:55:22.300608 1653676 command_runner.go:130] > ID=ubuntu
	I0804 08:55:22.300614 1653676 command_runner.go:130] > ID_LIKE=debian
	I0804 08:55:22.300622 1653676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0804 08:55:22.300634 1653676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0804 08:55:22.300652 1653676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0804 08:55:22.300662 1653676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0804 08:55:22.300667 1653676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0804 08:55:22.300719 1653676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 08:55:22.300753 1653676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 08:55:22.300768 1653676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 08:55:22.300780 1653676 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 08:55:22.300795 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 08:55:22.300857 1653676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 08:55:22.300964 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 08:55:22.300977 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /etc/ssl/certs/15826902.pem
	I0804 08:55:22.301064 1653676 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 08:55:22.301073 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> /etc/test/nested/copy/1582690/hosts
	I0804 08:55:22.301115 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 08:55:22.308734 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:22.329778 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 08:55:22.350435 1653676 start.go:296] duration metric: took 160.385758ms for postStartSetup
	I0804 08:55:22.350534 1653676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 08:55:22.350588 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.367129 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.453443 1653676 command_runner.go:130] > 33%
	I0804 08:55:22.453718 1653676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 08:55:22.457863 1653676 command_runner.go:130] > 197G
	I0804 08:55:22.457888 1653676 fix.go:56] duration metric: took 1.534232726s for fixHost
	I0804 08:55:22.457898 1653676 start.go:83] releasing machines lock for "functional-699837", held for 1.534258328s
	I0804 08:55:22.457964 1653676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 08:55:22.474710 1653676 ssh_runner.go:195] Run: cat /version.json
	I0804 08:55:22.474768 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.474834 1653676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 08:55:22.474905 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:22.492489 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.492983 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:22.576302 1653676 command_runner.go:130] > {"iso_version": "v1.36.0-1753487480-21147", "kicbase_version": "v0.0.47-1753871403-21198", "minikube_version": "v1.36.0", "commit": "69470231e9abd2d11a84a83b271e426458d5d12f"}
	I0804 08:55:22.576422 1653676 ssh_runner.go:195] Run: systemctl --version
	I0804 08:55:22.653754 1653676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 08:55:22.655827 1653676 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.16)
	I0804 08:55:22.655870 1653676 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0804 08:55:22.655949 1653676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 08:55:22.659872 1653676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0804 08:55:22.659895 1653676 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:22.659905 1653676 command_runner.go:130] > Device: 37h/55d	Inode: 822247      Links: 1
	I0804 08:55:22.659914 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:22.659929 1653676 command_runner.go:130] > Access: 2025-08-04 08:46:48.521872821 +0000
	I0804 08:55:22.659937 1653676 command_runner.go:130] > Modify: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659947 1653676 command_runner.go:130] > Change: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.659959 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:46:48.497871149 +0000
	I0804 08:55:22.660164 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 08:55:22.676431 1653676 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 08:55:22.676489 1653676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 08:55:22.683904 1653676 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 08:55:22.683925 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:22.683957 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:22.684079 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:22.696848 1653676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0804 08:55:22.698010 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
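[Editor's note] The line above shows the kubeadm binary being fetched with its checksum pinned to the detached .sha256 file rather than served from the local cache. A sketch of the verification such a "checksum=file:..." URL implies (hypothetical, with assumed local file names; not minikube's implementation):

	// checksum_check.go: hypothetical sketch of verifying a download
	// against a detached SHA-256 file like kubeadm.sha256.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("kubeadm") // assumed: the downloaded binary
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		want, err := os.ReadFile("kubeadm.sha256") // assumed: the detached checksum
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		sum := sha256.Sum256(data)
		got := hex.EncodeToString(sum[:])
		if got != strings.TrimSpace(string(want)) {
			fmt.Println("checksum mismatch:", got)
			return
		}
		fmt.Println("checksum ok")
	}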
	I0804 08:55:23.084233 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 08:55:23.094208 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 08:55:23.103030 1653676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 08:55:23.103076 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 08:55:23.111645 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.120216 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 08:55:23.128524 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 08:55:23.137020 1653676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 08:55:23.144932 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 08:55:23.153318 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 08:55:23.161730 1653676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
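Net effect of the sed pipeline above: a handful of keys are forced in /etc/containerd/config.toml (pause image, OOM score handling, cgroupfs via SystemdCgroup=false, runc v2 runtime, CNI conf dir, unprivileged ports). A quick way to confirm the result on the node; the grep pattern covers exactly the keys the commands touch, with expected values shown as comments:

# Expected hits after the patches above (line numbers will vary):
#   sandbox_image = "registry.k8s.io/pause:3.10"
#   restrict_oom_score_adj = false
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.d"
#   enable_unprivileged_ports = true
grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
  /etc/containerd/config.toml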
	I0804 08:55:23.170124 1653676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 08:55:23.176419 1653676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 08:55:23.177058 1653676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 08:55:23.184211 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:23.265466 1653676 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 08:55:23.467281 1653676 start.go:495] detecting cgroup driver to use...
	I0804 08:55:23.467337 1653676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 08:55:23.467388 1653676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 08:55:23.477772 1653676 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0804 08:55:23.477865 1653676 command_runner.go:130] > [Unit]
	I0804 08:55:23.477892 1653676 command_runner.go:130] > Description=Docker Application Container Engine
	I0804 08:55:23.477904 1653676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0804 08:55:23.477912 1653676 command_runner.go:130] > BindsTo=containerd.service
	I0804 08:55:23.477924 1653676 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0804 08:55:23.477935 1653676 command_runner.go:130] > Wants=network-online.target
	I0804 08:55:23.477942 1653676 command_runner.go:130] > Requires=docker.socket
	I0804 08:55:23.477950 1653676 command_runner.go:130] > StartLimitBurst=3
	I0804 08:55:23.477958 1653676 command_runner.go:130] > StartLimitIntervalSec=60
	I0804 08:55:23.477963 1653676 command_runner.go:130] > [Service]
	I0804 08:55:23.477971 1653676 command_runner.go:130] > Type=notify
	I0804 08:55:23.477977 1653676 command_runner.go:130] > Restart=on-failure
	I0804 08:55:23.477992 1653676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0804 08:55:23.478010 1653676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0804 08:55:23.478023 1653676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0804 08:55:23.478048 1653676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0804 08:55:23.478062 1653676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0804 08:55:23.478073 1653676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0804 08:55:23.478088 1653676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0804 08:55:23.478104 1653676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0804 08:55:23.478125 1653676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0804 08:55:23.478140 1653676 command_runner.go:130] > ExecStart=
	I0804 08:55:23.478162 1653676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0804 08:55:23.478451 1653676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0804 08:55:23.478489 1653676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0804 08:55:23.478505 1653676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0804 08:55:23.478520 1653676 command_runner.go:130] > LimitNOFILE=infinity
	I0804 08:55:23.478529 1653676 command_runner.go:130] > LimitNPROC=infinity
	I0804 08:55:23.478536 1653676 command_runner.go:130] > LimitCORE=infinity
	I0804 08:55:23.478544 1653676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0804 08:55:23.478559 1653676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0804 08:55:23.478570 1653676 command_runner.go:130] > TasksMax=infinity
	I0804 08:55:23.478576 1653676 command_runner.go:130] > TimeoutStartSec=0
	I0804 08:55:23.478586 1653676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0804 08:55:23.478592 1653676 command_runner.go:130] > Delegate=yes
	I0804 08:55:23.478606 1653676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0804 08:55:23.478612 1653676 command_runner.go:130] > KillMode=process
	I0804 08:55:23.478659 1653676 command_runner.go:130] > [Install]
	I0804 08:55:23.478680 1653676 command_runner.go:130] > WantedBy=multi-user.target
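The doubled ExecStart in the unit dump above is deliberate: except for Type=oneshot, systemd refuses units that accumulate more than one ExecStart, so a drop-in must first clear the inherited command with an empty assignment before supplying its own. A generic sketch of the pattern (unit path and dockerd flags are illustrative, not the ones from the run):

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
[Service]
# Clear the ExecStart inherited from the base unit, then set the new one;
# without the empty assignment systemd sees two ExecStart values and errors out.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
sudo systemctl daemon-reload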
	I0804 08:55:23.480586 1653676 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 08:55:23.480654 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 08:55:23.491375 1653676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 08:55:23.505761 1653676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
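crictl resolves its runtime endpoint from /etc/crictl.yaml, which is why the file is rewritten here, switching from the containerd socket to the cri-dockerd one. A sketch that points crictl at an equivalent file explicitly (assumes crictl is installed; --config is its standard global flag):

f=$(mktemp)
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' >"$f"
sudo crictl --config "$f" version   # talks to whichever runtime the file names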
	I0804 08:55:23.506806 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:23.923432 1653676 ssh_runner.go:195] Run: which cri-dockerd
	I0804 08:55:23.926961 1653676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0804 08:55:23.927156 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 08:55:23.935149 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 08:55:23.950832 1653676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 08:55:24.042992 1653676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 08:55:24.297851 1653676 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 08:55:24.297998 1653676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 08:55:24.377001 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 08:55:24.388783 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:24.510366 1653676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 08:55:24.982429 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 08:55:24.992600 1653676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 08:55:25.006985 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.016432 1653676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 08:55:25.099651 1653676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 08:55:25.175485 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.251241 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 08:55:25.263161 1653676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 08:55:25.272497 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:25.348098 1653676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 08:55:25.408736 1653676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 08:55:25.419584 1653676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 08:55:25.419655 1653676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 08:55:25.422672 1653676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0804 08:55:25.422693 1653676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 08:55:25.422702 1653676 command_runner.go:130] > Device: 45h/69d	Inode: 1258        Links: 1
	I0804 08:55:25.422711 1653676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0804 08:55:25.422722 1653676 command_runner.go:130] > Access: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422730 1653676 command_runner.go:130] > Modify: 2025-08-04 08:55:25.353889433 +0000
	I0804 08:55:25.422743 1653676 command_runner.go:130] > Change: 2025-08-04 08:55:25.357889711 +0000
	I0804 08:55:25.422749 1653676 command_runner.go:130] >  Birth: -
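The 60s socket wait above reduces to polling stat until the socket exists. An equivalent loop, with the path and budget taken from the log and the loop shape itself illustrative:

sock=/var/run/cri-dockerd.sock
for _ in $(seq 1 60); do              # roughly a 60s budget, as in the log
  stat "$sock" >/dev/null 2>&1 && break
  sleep 1
done
stat -c '%F %a %U:%G' "$sock"         # e.g. "socket 660 root:docker"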
	I0804 08:55:25.422776 1653676 start.go:563] Will wait 60s for crictl version
	I0804 08:55:25.422814 1653676 ssh_runner.go:195] Run: which crictl
	I0804 08:55:25.425611 1653676 command_runner.go:130] > /usr/bin/crictl
	I0804 08:55:25.425730 1653676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 08:55:25.455697 1653676 command_runner.go:130] > Version:  0.1.0
	I0804 08:55:25.455721 1653676 command_runner.go:130] > RuntimeName:  docker
	I0804 08:55:25.455727 1653676 command_runner.go:130] > RuntimeVersion:  28.3.3
	I0804 08:55:25.455733 1653676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 08:55:25.458002 1653676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 08:55:25.458069 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.480067 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.481564 1653676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 08:55:25.502625 1653676 command_runner.go:130] > 28.3.3
	I0804 08:55:25.506722 1653676 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 08:55:25.506807 1653676 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 08:55:25.523376 1653676 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 08:55:25.526929 1653676 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0804 08:55:25.527043 1653676 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 08:55:25.527223 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:25.922076 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.309911 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:26.726305 1653676 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:55:26.726461 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.101061 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.477147 1653676 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 08:55:27.859614 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.878541 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.878563 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.878570 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.878580 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.878585 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.878590 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.878595 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.878599 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.878603 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.879821 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.879847 1653676 docker.go:633] Images already preloaded, skipping extraction
	I0804 08:55:27.879906 1653676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 08:55:27.898058 1653676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 08:55:27.898084 1653676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 08:55:27.898091 1653676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 08:55:27.898095 1653676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 08:55:27.898099 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.6.1-1
	I0804 08:55:27.898103 1653676 command_runner.go:130] > registry.k8s.io/etcd:3.5.21-0
	I0804 08:55:27.898109 1653676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.12.1
	I0804 08:55:27.898113 1653676 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0804 08:55:27.898117 1653676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:27.898143 1653676 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 08:55:27.898167 1653676 cache_images.go:85] Images are preloaded, skipping loading
	I0804 08:55:27.898180 1653676 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 08:55:27.898290 1653676 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
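The kubelet unit and drop-in rendered above land on the node as the files scp'd a few lines below (10-kubeadm.conf plus kubelet.service). On a reasonably recent systemd, the merged result can be inspected with stock tooling:

systemctl cat kubelet.service          # base unit plus the 10-kubeadm.conf drop-in
systemd-analyze verify kubelet.service # flags syntax problems in the merged unit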
	I0804 08:55:27.898340 1653676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 08:55:27.944494 1653676 command_runner.go:130] > cgroupfs
	I0804 08:55:27.946023 1653676 cni.go:84] Creating CNI manager for ""
	I0804 08:55:27.946045 1653676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:55:27.946061 1653676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 08:55:27.946082 1653676 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 08:55:27.946247 1653676 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
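With the full manifest reproduced above, a file like this can be sanity-checked offline before kubeadm consumes it; recent kubeadm releases ship a validate subcommand (path from the scp a few lines below):

sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new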
	I0804 08:55:27.946320 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 08:55:27.953892 1653676 command_runner.go:130] > kubeadm
	I0804 08:55:27.953910 1653676 command_runner.go:130] > kubectl
	I0804 08:55:27.953915 1653676 command_runner.go:130] > kubelet
	I0804 08:55:27.954677 1653676 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 08:55:27.954730 1653676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 08:55:27.962553 1653676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 08:55:27.978365 1653676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 08:55:27.994068 1653676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0804 08:55:28.009976 1653676 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 08:55:28.013276 1653676 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0804 08:55:28.013353 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.101449 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.112250 1653676 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 08:55:28.112270 1653676 certs.go:194] generating shared ca certs ...
	I0804 08:55:28.112291 1653676 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.112464 1653676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 08:55:28.112506 1653676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 08:55:28.112516 1653676 certs.go:256] generating profile certs ...
	I0804 08:55:28.112631 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 08:55:28.112686 1653676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 08:55:28.112722 1653676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 08:55:28.112733 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 08:55:28.112747 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 08:55:28.112759 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 08:55:28.112772 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 08:55:28.112783 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 08:55:28.112795 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 08:55:28.112808 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 08:55:28.112819 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 08:55:28.112866 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 08:55:28.112898 1653676 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 08:55:28.112907 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 08:55:28.112929 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 08:55:28.112954 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 08:55:28.112975 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 08:55:28.113011 1653676 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 08:55:28.113036 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.113051 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.113068 1653676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem -> /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.113660 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 08:55:28.135009 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 08:55:28.155784 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 08:55:28.176520 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 08:55:28.197558 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 08:55:28.218349 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 08:55:28.239391 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 08:55:28.259973 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 08:55:28.280899 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 08:55:28.301872 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 08:55:28.322816 1653676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 08:55:28.343561 1653676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 08:55:28.359122 1653676 ssh_runner.go:195] Run: openssl version
	I0804 08:55:28.363884 1653676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0804 08:55:28.364128 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 08:55:28.372266 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375320 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375365 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.375402 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 08:55:28.381281 1653676 command_runner.go:130] > b5213941
	I0804 08:55:28.381530 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 08:55:28.388997 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 08:55:28.397048 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399946 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.399991 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.400016 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 08:55:28.406052 1653676 command_runner.go:130] > 51391683
	I0804 08:55:28.406304 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 08:55:28.413987 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 08:55:28.422286 1653676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425317 1653676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425349 1653676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.425376 1653676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 08:55:28.431562 1653676 command_runner.go:130] > 3ec20f2e
	I0804 08:55:28.431844 1653676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
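The openssl/ln pairs above build OpenSSL's hash-link layout: the library locates CAs by subject-hash filename, so each certificate gets an /etc/ssl/certs/<hash>.0 symlink. Condensed into one sketch (one cert per hash assumed; a collision would take .1, .2, and so on):

pem=/usr/share/ca-certificates/minikubeCA.pem
h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941, as in the log
sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"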
	I0804 08:55:28.439543 1653676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442556 1653676 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 08:55:28.442581 1653676 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 08:55:28.442590 1653676 command_runner.go:130] > Device: 801h/2049d	Inode: 822354      Links: 1
	I0804 08:55:28.442597 1653676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 08:55:28.442603 1653676 command_runner.go:130] > Access: 2025-08-04 08:51:18.188665144 +0000
	I0804 08:55:28.442607 1653676 command_runner.go:130] > Modify: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442614 1653676 command_runner.go:130] > Change: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442619 1653676 command_runner.go:130] >  Birth: 2025-08-04 08:47:12.683556584 +0000
	I0804 08:55:28.442691 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 08:55:28.448546 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.448806 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 08:55:28.454608 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.454889 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 08:55:28.460580 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.460805 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 08:55:28.466615 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.466839 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 08:55:28.472661 1653676 command_runner.go:130] > Certificate will not expire
	I0804 08:55:28.472705 1653676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 08:55:28.478445 1653676 command_runner.go:130] > Certificate will not expire
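The "Certificate will not expire" lines come straight from openssl: -checkend N prints that verdict and exits 0 when the certificate is still valid N seconds from now, so 86400 asks "good for at least one more day?":

openssl x509 -noout -checkend 86400 \
  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
echo "exit=$?"   # 0: valid for at least another day; 1: expires sooner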
	I0804 08:55:28.478508 1653676 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:55:28.478619 1653676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 08:55:28.496419 1653676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 08:55:28.503804 1653676 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0804 08:55:28.503825 1653676 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0804 08:55:28.503833 1653676 command_runner.go:130] > /var/lib/minikube/etcd:
	I0804 08:55:28.504531 1653676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 08:55:28.504546 1653676 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 08:55:28.504584 1653676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 08:55:28.511980 1653676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 08:55:28.512384 1653676 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-699837" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.512513 1653676 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "functional-699837" cluster setting kubeconfig missing "functional-699837" context setting]
	I0804 08:55:28.512791 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.513199 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.513384 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.513811 1653676 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0804 08:55:28.513826 1653676 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0804 08:55:28.513833 1653676 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0804 08:55:28.513839 1653676 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0804 08:55:28.513844 1653676 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0804 08:55:28.513876 1653676 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
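The "needs updating (will repair)" step amounts to re-adding the missing cluster and context entries to the kubeconfig. Done by hand with stock kubectl it would look roughly like this; the server, CA path, and kubeconfig path are from the log, and the user entry is assumed to exist already:

cfg=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
kubectl config set-cluster functional-699837 \
  --server=https://192.168.49.2:8441 \
  --certificate-authority=/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt \
  --kubeconfig="$cfg"
kubectl config set-context functional-699837 \
  --cluster=functional-699837 --user=functional-699837 --kubeconfig="$cfg"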
	I0804 08:55:28.514257 1653676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 08:55:28.521605 1653676 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0804 08:55:28.521634 1653676 kubeadm.go:593] duration metric: took 17.082556ms to restartPrimaryControlPlane
	I0804 08:55:28.521645 1653676 kubeadm.go:394] duration metric: took 43.142663ms to StartCluster
	I0804 08:55:28.521666 1653676 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.521736 1653676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.522230 1653676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:55:28.522435 1653676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 08:55:28.522512 1653676 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 08:55:28.522651 1653676 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 08:55:28.522656 1653676 addons.go:69] Setting storage-provisioner=true in profile "functional-699837"
	I0804 08:55:28.522728 1653676 addons.go:238] Setting addon storage-provisioner=true in "functional-699837"
	I0804 08:55:28.522681 1653676 addons.go:69] Setting default-storageclass=true in profile "functional-699837"
	I0804 08:55:28.522800 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.522810 1653676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-699837"
	I0804 08:55:28.523050 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.523236 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.524415 1653676 out.go:177] * Verifying Kubernetes components...
	I0804 08:55:28.525459 1653676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 08:55:28.542729 1653676 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:55:28.542941 1653676 kapi.go:59] client config for functional-699837: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 08:55:28.543225 1653676 addons.go:238] Setting addon default-storageclass=true in "functional-699837"
	I0804 08:55:28.543255 1653676 host.go:66] Checking if "functional-699837" exists ...
	I0804 08:55:28.543552 1653676 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 08:55:28.543853 1653676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 08:55:28.545053 1653676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.545072 1653676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 08:55:28.545126 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.560950 1653676 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.560976 1653676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 08:55:28.561028 1653676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 08:55:28.561396 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.582841 1653676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 08:55:28.617980 1653676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 08:55:28.628515 1653676 node_ready.go:35] waiting up to 6m0s for node "functional-699837" to be "Ready" ...
	I0804 08:55:28.628655 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:28.628715 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:28.628984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
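The GET /api/v1/nodes/functional-699837 with an empty response above is the readiness poll hitting an apiserver that is still coming back up. The same probe via kubectl (the jsonpath expression is a standard one, not taken from the run):

kubectl get node functional-699837 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
  || echo "apiserver not reachable yet"   # prints True once the node is Ready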
	I0804 08:55:28.669259 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.681042 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:28.723292 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.723334 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.723359 1653676 retry.go:31] will retry after 184.647945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732373 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.732422 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.732443 1653676 retry.go:31] will retry after 304.201438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.908717 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:28.958881 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:28.958925 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:28.958945 1653676 retry.go:31] will retry after 476.117899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.037179 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.088413 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.088468 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.088491 1653676 retry.go:31] will retry after 197.264107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.129716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.130032 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:29.286304 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:29.334473 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.337029 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.337065 1653676 retry.go:31] will retry after 823.238005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.435237 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:29.482679 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:29.485403 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.485436 1653676 retry.go:31] will retry after 800.644745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:29.629726 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:29.629799 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:29.630104 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.128837 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.128917 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.129285 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:30.161434 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.213167 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.213231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.213275 1653676 retry.go:31] will retry after 656.353253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.286342 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.334470 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.336981 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.337012 1653676 retry.go:31] will retry after 508.253019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.629489 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:30.629586 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:30.629950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:30.630017 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
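
[Editor's note] Interleaved with the applies, the client polls GET /api/v1/nodes/functional-699837 every ~500ms and emits the warning above whenever the connection is refused. The following is a hedged client-go sketch of such a readiness poll; waitNodeReady and the 500ms ticker are illustrative choices, not minikube's node_ready.go API.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady GETs the node every 500ms until its Ready condition is True,
// logging and retrying on transient errors such as "connection refused".
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	// Kubeconfig path and node name taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-699837"); err != nil {
		panic(err)
	}
}
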
	I0804 08:55:30.845486 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:30.869953 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:30.897779 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.897836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.897862 1653676 retry.go:31] will retry after 1.094600532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922225 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:30.922291 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:30.922314 1653676 retry.go:31] will retry after 805.303636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.129681 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.129760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.628691 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:31.628775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:31.629122 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:31.728325 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:31.779677 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:31.779728 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.779748 1653676 retry.go:31] will retry after 2.236258385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:31.993064 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:32.044458 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:32.044511 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.044552 1653676 retry.go:31] will retry after 1.503507165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:32.129706 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.129775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.130079 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:32.629732 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:32.629813 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:32.630171 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:32.630256 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:33.128768 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.129210 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:33.548844 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:33.599998 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:33.600058 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.600081 1653676 retry.go:31] will retry after 1.994543648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:33.629251 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:33.629339 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:33.629634 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.017206 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:34.068508 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:34.068573 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.068597 1653676 retry.go:31] will retry after 3.823609715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:34.128678 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.128751 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.129067 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:34.629688 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:34.629764 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:34.630098 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.129721 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.130115 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:35.130189 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:35.595749 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:35.629120 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:35.629209 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:35.629582 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:35.645323 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:35.647845 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:35.647880 1653676 retry.go:31] will retry after 3.559085278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:36.129701 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.129780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.130117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:36.628869 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:36.628953 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:36.629336 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.129085 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.129171 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.129515 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:37.629335 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:37.629411 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:37.629704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:37.629765 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:37.893118 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:37.941760 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:37.944423 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:37.944452 1653676 retry.go:31] will retry after 4.996473933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:38.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.128878 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.129260 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:38.628699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:38.628786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:38.629112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.128699 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.128786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.129139 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:39.207320 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:39.257569 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:39.257615 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.257640 1653676 retry.go:31] will retry after 8.124151658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:39.629122 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:39.629208 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:39.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:40.129218 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.129325 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.129628 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:40.129693 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:40.629297 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:40.629368 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:40.629673 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.129406 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.129495 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.129887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:41.629498 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:41.629579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:41.629928 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.129645 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.130002 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:42.130063 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:42.629629 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:42.629709 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:42.630062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:42.941490 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:42.990741 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:42.993232 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:42.993279 1653676 retry.go:31] will retry after 4.825851231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:43.129602 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.129690 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.130065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:43.628834 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:43.628909 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:43.629270 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.129025 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.129120 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:44.629359 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:44.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:44.629737 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:44.629803 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:45.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.129961 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:45.628704 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:45.628789 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:45.629130 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.128858 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.128936 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.129295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:46.629013 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:46.629096 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:46.629444 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.129179 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.129266 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.129609 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:47.129674 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:47.381978 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:47.430195 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.433093 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.433123 1653676 retry.go:31] will retry after 10.012002454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.629500 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:47.629573 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:47.629910 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:47.820313 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:55:47.870430 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:55:47.870476 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:47.870493 1653676 retry.go:31] will retry after 10.075489679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:55:48.128804 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.128895 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.129267 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:48.629030 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:48.629141 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:48.629503 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:49.129320 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:49.129409 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:49.129785 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:49.129864 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:49.629600 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:49.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:49.629992 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:50.128745 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:50.128835 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:50.129191 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:50.628937 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:50.629015 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:50.629395 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:51.128731 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:51.128818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:51.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:51.628936 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:51.629009 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:51.629384 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:51.629473 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:52.129137 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:52.129221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:52.129575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:52.629361 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:52.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:52.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:53.129540 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:53.129620 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:53.129949 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:53.628671 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:53.628747 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:53.629071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:54.128801 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:54.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:54.129261 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:55:54.129334 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:55:54.629005 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:54.629105 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:54.629481 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:55.129371 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:55.129447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:55.129804 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:55.629597 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:55.629674 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:55.630007 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:55:56.128707 1653676 type.go:168] "Request Body" body=""
	I0804 08:55:56.128802 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:55:57.445382 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:55:57.946208 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:06.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:06.129644 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:56:06.129694 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:06.129736 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.130254 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:56:16.130338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:56:16.130408 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:16.130480 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:16.262782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=132
	I0804 08:56:17.263910 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:56:17.264149 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264472 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
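[The with_retry.go record above shows client-go honoring an HTTP Retry-After header: the apiserver finally answered (132 ms) but asked the client to back off 1 s before retrying. A generic sketch of that behavior — not client-go's exact implementation, and handling only the integer-seconds form of Retry-After:

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// doWithRetryAfter re-issues a GET whenever the response carries an
// integer Retry-After header, sleeping for the advertised delay.
func doWithRetryAfter(client *http.Client, url string, attempts int) (*http.Response, error) {
	var resp *http.Response
	var err error
	for i := 1; i <= attempts; i++ {
		resp, err = client.Get(url)
		if err != nil {
			return nil, err
		}
		secs, convErr := strconv.Atoi(resp.Header.Get("Retry-After"))
		if convErr != nil {
			return resp, nil // no (or non-integer) Retry-After: done
		}
		resp.Body.Close()
		fmt.Printf("Got a Retry-After response, delay=%ds attempt=%d\n", secs, i)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return resp, err // attempts exhausted
}

func main() {
	resp, err := doWithRetryAfter(http.DefaultClient,
		"https://192.168.49.2:8441/api/v1/nodes/functional-699837", 2)
	fmt.Println(resp, err)
}
]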
	I0804 08:56:17.264610 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.264716 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.264973 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:17.267370 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267420 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (19.822003727s)
	W0804 08:56:17.267450 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267470 1653676 retry.go:31] will retry after 18.146841122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38248->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267784 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267815 1653676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (19.321577292s)
	W0804 08:56:17.267836 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.267852 1653676 retry.go:31] will retry after 19.077492147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:38252->[::1]:8441: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:17.629331 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:17.629410 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:17.629777 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:18.129400 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:18.129489 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:18.129796 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:18.629536 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:18.629618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:18.629944 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:18.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:19.129659 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:19.129746 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:19.130112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:19.628758 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:19.628835 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:19.629178 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:20.128732 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:20.128806 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:20.129156 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:20.628674 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:20.628755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:20.629081 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:21.128792 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:21.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:21.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:21.129324 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:21.629020 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:21.629101 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:21.629489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:22.129299 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:22.129389 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:22.129751 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:22.629584 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:22.629664 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:22.629996 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:23.128722 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:23.128828 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:23.129192 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:23.628966 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:23.629055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:23.629374 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:23.629437 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:24.129128 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:24.129225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:24.129600 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:24.629381 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:24.629467 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:24.629838 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:25.129635 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:25.129755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:25.130108 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:25.628815 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:25.628905 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:25.629282 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:26.128941 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:26.129024 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:26.129386 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:26.129469 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:26.629153 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:26.629266 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:26.629626 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:27.129444 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:27.129526 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:27.129867 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:27.629658 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:27.629737 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:27.630140 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:28.128857 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:28.128947 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:28.129307 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:28.629734 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:28.629837 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:28.630240 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:28.630338 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:29.129055 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:29.129168 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:29.129536 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:29.629363 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:29.629443 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:29.629791 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:30.129636 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:30.129710 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:30.130048 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:30.628774 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:30.628849 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:30.629212 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:31.128887 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:31.128984 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:31.129358 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:31.129426 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:31.629089 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:31.629164 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:31.629502 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:32.129335 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:32.129440 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:32.129852 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:32.629638 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:32.629720 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:32.630056 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:33.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:33.128882 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:33.129261 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:33.628999 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:33.629072 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:33.629432 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:33.629497 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:34.129184 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:34.129308 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:34.129684 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:34.629474 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:34.629546 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:34.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:35.129661 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:35.129748 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:35.130119 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:35.414447 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:35.463330 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:35.466231 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.466267 1653676 retry.go:31] will retry after 13.873476046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:35.629483 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:35.629558 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:35.629897 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:35.629960 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:36.129639 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:36.129713 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:36.130046 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:36.346375 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:36.394439 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:36.396962 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.396996 1653676 retry.go:31] will retry after 20.764306788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:36.629373 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:36.629461 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:36.629797 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:37.129619 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:37.129700 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:37.130049 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:37.628786 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:37.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:37.629214 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:38.129001 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:38.129075 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:38.129435 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:38.129504 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:38.629094 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:38.629186 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:38.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:39.129329 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:39.129403 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:39.129733 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:39.629535 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:39.629607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:39.629940 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:40.129719 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:40.129801 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:40.130145 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:40.130216 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:40.628884 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:40.628964 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:40.629317 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:41.128956 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:41.129035 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:41.129355 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:41.629076 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:41.629150 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:41.629485 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:42.129286 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:42.129362 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:42.129691 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:42.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:42.629537 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:42.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:42.629938 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:43.129673 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:43.129756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:43.130100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:43.628809 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:43.628889 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:43.629208 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:44.128939 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:44.129019 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:44.129378 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:44.629097 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:44.629182 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:44.629521 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:45.129310 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:45.129387 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:45.129760 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:45.129832 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:45.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:45.629633 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:45.630029 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:46.128691 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:46.128772 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:46.129112 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:46.628845 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:46.628920 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:46.629291 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:47.129029 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:47.129126 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:47.129500 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:47.629337 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:47.629420 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:47.629741 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:47.629802 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:48.129626 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:48.129722 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:48.130077 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:48.628742 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:48.628836 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:48.629189 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:49.128743 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:49.128827 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:49.129185 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:49.340493 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:56:49.391267 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:49.391322 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.391344 1653676 retry.go:31] will retry after 22.530122873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 08:56:49.629701 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:49.629775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:49.630094 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:49.630167 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:50.128781 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:50.128853 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:50.129231 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:50.628838 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:50.628912 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:50.629276 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:51.129234 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:51.129318 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:51.129637 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:51.629350 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:51.629441 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:51.629759 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:52.129549 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:52.129656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:52.129995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:52.130058 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:52.628710 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:52.628778 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:52.629090 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:53.128873 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:53.128994 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:53.129417 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:53.629155 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:53.629225 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:53.629551 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:54.129336 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:54.129409 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:54.129789 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:54.629582 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:54.629657 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:54.629978 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:54.630042 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:55.128737 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:55.128827 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:55.129209 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:55.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:55.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:55.629995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:56.129718 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:56.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:56.130127 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:56.628839 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:56.628957 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:56.629326 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:56:57.129049 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:57.129165 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:57.129545 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:57.129614 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:56:57.161690 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 08:56:57.212094 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212172 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:56:57.212321 1653676 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
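[At this point minikube stops retrying the default-storageclass addon and surfaces the error to the user. Every request in this section failed at TCP connect or TLS handshake, i.e. kube-apiserver never became reachable on 192.168.49.2:8441, so each apply was doomed regardless of retry schedule. A pre-flight probe of the apiserver's /readyz endpoint (served unauthenticated under default RBAC) makes that precondition explicit — a sketch, with InsecureSkipVerify for illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver ready; safe to apply addons")
			return
		}
		if err != nil {
			fmt.Println("not ready:", err)
		} else {
			resp.Body.Close()
			fmt.Println("not ready: status", resp.StatusCode)
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for apiserver")
}
]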
	I0804 08:56:57.629703 1653676 type.go:168] "Request Body" body=""
	I0804 08:56:57.629786 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:56:57.630137 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:56:59.129831 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll repeated every ~500ms from 08:56:57 to 08:57:11 with empty responses; the "will retry" warning recurred at 08:57:01, 08:57:04, 08:57:06, 08:57:08, and 08:57:11 ...]
	I0804 08:57:11.922305 1653676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 08:57:11.970691 1653676 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973096 1653676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 08:57:11.973263 1653676 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 08:57:11.975142 1653676 out.go:177] * Enabled addons: 
	I0804 08:57:11.976503 1653676 addons.go:514] duration metric: took 1m43.454009966s for enable addons: enabled=[]
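	Both addon applies above fail for the same underlying reason as the node polling: nothing is answering on the apiserver port (8441), so kubectl can neither download the OpenAPI schema for validation nor reach the API at all. As a minimal sketch, not part of the test run: once the apiserver is reachable again, the exact command from the log could be retried by hand; the --validate=false that kubectl's error message suggests only skips the OpenAPI download and would not get around the refused connection itself.

	  # Retry of the apply that failed at 08:57:11 (run inside the minikube node,
	  # as minikube's ssh_runner does); succeeds only once the apiserver is back up.
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml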
	I0804 08:57:12.129480 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:12.129579 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:12.129915 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:13.130084 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the poll/refuse cycle continues unchanged every ~500ms, with "will retry" warnings roughly every 2.5s from 08:57:15 through 08:57:56; the captured log ends mid-request at 08:57:57 ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:57.629859 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:58.129657 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:58.129755 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:58.130109 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:57:58.130182 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:57:58.628778 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:58.628892 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:58.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:59.128942 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:59.129046 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:59.129427 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:57:59.629154 1653676 type.go:168] "Request Body" body=""
	I0804 08:57:59.629257 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:57:59.629579 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:00.129357 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:00.129459 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:00.129797 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:00.629587 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:00.629677 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:00.630022 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:00.630087 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:01.128755 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:01.128831 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:01.129179 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:01.628959 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:01.629054 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:01.629420 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:02.129182 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:02.129295 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:02.129668 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:02.629476 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:02.629572 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:02.629862 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:03.129679 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:03.129759 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:03.130099 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:03.130172 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:03.628846 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:03.628948 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:03.629308 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:04.129055 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:04.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:04.129501 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:04.629285 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:04.629371 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:04.629678 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:05.129485 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:05.129556 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:05.129895 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:05.629689 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:05.629775 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:05.630092 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:05.630166 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:06.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:06.128884 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:06.129262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:06.628981 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:06.629094 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:06.629442 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:07.129153 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:07.129236 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:07.129612 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:07.629373 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:07.629460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:07.629767 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:08.129560 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:08.129642 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:08.129999 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:08.130067 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:08.628667 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:08.628761 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:08.629105 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:09.128826 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:09.128902 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:09.129208 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:09.628951 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:09.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:09.629355 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:10.129067 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:10.129144 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:10.129526 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:10.629346 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:10.629440 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:10.629755 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:10.629825 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:11.129536 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:11.129607 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:11.129931 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:11.628656 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:11.628740 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:11.629041 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:12.128773 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:12.128847 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:12.129188 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:12.628944 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:12.629039 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:12.629370 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:13.129112 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:13.129185 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:13.129528 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:13.129601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:13.628854 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:13.628929 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:13.629262 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:14.129022 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:14.129107 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:14.129456 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:14.629179 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:14.629262 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:14.629560 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:15.129358 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:15.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:15.129768 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:15.129842 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:15.629588 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:15.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:15.629993 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:16.128722 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:16.128807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:16.129155 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:16.628888 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:16.628968 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:16.629289 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:17.128871 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:17.128958 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:17.129331 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:17.629089 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:17.629163 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:17.629498 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:17.629579 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:18.129331 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:18.129413 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:18.129748 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:18.629352 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:18.629431 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:18.629731 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:19.129531 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:19.129601 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:19.129926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:19.629715 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:19.629793 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:19.630096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:19.630165 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:20.128817 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:20.128892 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:20.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:20.628986 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:20.629062 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:20.629379 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:21.129140 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:21.129256 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:21.129611 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:21.629346 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:21.629422 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:21.629705 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:22.129503 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:22.129592 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:22.129936 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:22.130013 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:22.628702 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:22.628771 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:22.629065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:23.128773 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:23.128856 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:23.129193 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:23.628915 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:23.629017 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:23.629329 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:24.129041 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:24.129130 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:24.129485 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:24.629265 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:24.629368 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:24.629656 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:24.629721 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:25.129446 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:25.129542 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:25.129838 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:25.629614 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:25.629692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:25.630005 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:26.128734 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:26.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:26.129143 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:26.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:26.628945 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:26.629295 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:27.129001 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:27.129078 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:27.129430 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:27.129497 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:27.629154 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:27.629226 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:27.629562 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:28.129344 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:28.129447 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:28.129769 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:28.629456 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:28.629542 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:28.629856 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:29.129664 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:29.129750 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:29.130110 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:29.130200 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:29.628750 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:29.628825 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:29.629116 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:30.128860 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:30.128943 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:30.129300 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:30.629025 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:30.629107 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:30.629409 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:31.129309 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:31.129383 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:31.129732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:31.629506 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:31.629578 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:31.629869 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:31.629930 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:32.129669 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:32.129745 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:32.130096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:32.628810 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:32.628890 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:32.629161 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:33.128895 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:33.128972 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:33.129352 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:33.629078 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:33.629161 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:33.629537 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:34.129351 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:34.129430 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:34.129807 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:34.129887 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:34.629642 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:34.629714 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:34.630028 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:35.128785 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:35.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:35.129207 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:35.628963 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:35.629038 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:35.629350 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:36.129133 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:36.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:36.129495 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:36.629057 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:36.629152 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:36.629476 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:36.629541 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:37.129344 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:37.129435 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:37.129779 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:37.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:37.629665 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:37.629987 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:38.128723 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:38.128818 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:38.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:38.628949 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:38.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:38.629367 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:39.129078 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:39.129177 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:39.129555 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:39.129622 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:39.629381 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:39.629467 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:39.629800 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:40.129606 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:40.129705 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:40.130062 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:40.628786 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:40.628889 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:40.629233 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:41.129024 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:41.129100 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:41.129462 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:41.629280 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:41.629379 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:41.629701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:41.629762 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:42.129521 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:42.129597 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:42.129950 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:42.628667 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:42.628756 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:42.629073 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:43.128819 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:43.128897 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:43.129279 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:43.629033 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:43.629148 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:43.629489 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:44.129324 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:44.129407 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:44.129750 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:44.129816 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:58:44.629574 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:44.629658 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:44.629972 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:45.128703 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:45.128778 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:45.129125 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:45.628842 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:45.628933 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:45.629252 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:46.128948 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:46.129033 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:46.129380 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:58:46.629108 1653676 type.go:168] "Request Body" body=""
	I0804 08:58:46.629185 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:58:46.629520 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:58:46.629580 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-699837 poll repeated every ~500ms from 08:58:47.129 through 08:59:03.629, each request answered in 0ms with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go logged a "will retry" warning roughly every 2.5s (08:58:48, 08:58:51, 08:58:53, 08:58:56, 08:58:58, 08:59:01, 08:59:03) ...]
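The run above is a fixed-interval readiness poll: minikube re-issues the same GET every ~500ms and treats "connection refused" as a transient error while it waits for the apiserver to come up. A minimal Go sketch of that pattern follows; only the URL and the 500ms interval come from the log, while the function name, deadline handling, and plain TLS configuration are assumptions of the sketch (real code would use the cluster's client certificates), so this illustrates the shape of the loop rather than minikube's actual node_ready implementation.

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

// waitNodeReady polls the node endpoint every 500ms until the deadline passes,
// treating "connection refused" as transient, mirroring the log run above.
// nodeURL is copied from the log; everything else is invented for this sketch.
func waitNodeReady(deadline time.Duration) error {
	const nodeURL = "https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	client := &http.Client{Timeout: 15 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(nodeURL)
		if err != nil {
			if errors.Is(err, syscall.ECONNREFUSED) {
				// apiserver not listening yet: warn and retry, like node_ready.go
				fmt.Printf("W error getting node (will retry): %v\n", err)
			}
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			// a real client would decode the Node here and check its Ready condition
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not ready after %s", deadline)
}

func main() {
	if err := waitNodeReady(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}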
	I0804 08:59:04.129591 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:04.129684 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:14.133399 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10003
	W0804 08:59:14.133474 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:59:14.133535 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:14.133571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.134577 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=10000
	W0804 08:59:24.134670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": net/http: TLS handshake timeout
	I0804 08:59:24.134743 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:24.134791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:24.447100 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=312
	I0804 08:59:25.448003 1653676 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8441/api/v1/nodes/functional-699837"
	I0804 08:59:25.448109 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448371 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:25.448473 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.448503 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.448708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
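Two of the attempts above ran for ~10s (milliseconds=10003 and 10000) before failing with "net/http: TLS handshake timeout", which matches Go's default 10-second TLSHandshakeTimeout in net/http's DefaultTransport, and the with_retry.go line shows the client honoring a server-sent Retry-After header with a 1s delay before reissuing the request. A simplified sketch of that Retry-After handling follows; the integer-seconds parsing, the 1s fallback, and the function name are assumptions, so this is an illustration of the pattern, not client-go's actual with_retry implementation.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getHonoringRetryAfter reissues a GET after sleeping for the server-advertised
// Retry-After interval, up to maxRetries extra attempts. Integer-second parsing
// with a 1s fallback is an assumption; Retry-After may also carry an HTTP date.
func getHonoringRetryAfter(client *http.Client, url string, maxRetries int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		delay := resp.Header.Get("Retry-After")
		if delay == "" || attempt > maxRetries {
			return resp, nil // no throttling hint, or retry budget spent
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(delay)
		if convErr != nil || secs < 1 {
			secs = 1 // fallback when the header is absent-valued or a date
		}
		fmt.Printf("got a Retry-After response, delay=%ds attempt=%d url=%q\n", secs, attempt, url)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getHonoringRetryAfter(http.DefaultClient, "https://192.168.49.2:8441/api/v1/nodes/functional-699837", 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}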
	I0804 08:59:25.629198 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:25.629320 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:25.629693 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:26.129362 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:26.129438 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:26.129786 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:26.629562 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:26.629634 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:26.629913 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:26.629981 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:27.129710 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:27.129791 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:27.130145 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:27.628843 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:27.628915 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:27.629211 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:28.128958 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:28.129049 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:28.129414 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:28.629057 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:28.629131 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:28.629437 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:29.129142 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:29.129215 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:29.129570 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:29.129634 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:29.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:29.629434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:29.629732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:30.129550 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:30.129627 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:30.129981 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:30.628711 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:30.628785 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:30.629088 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:31.128761 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:31.128837 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:31.129194 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:31.628935 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:31.629013 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:31.629357 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:31.629423 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:32.129102 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:32.129207 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:32.129598 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:32.629343 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:32.629412 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:32.629682 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:33.129483 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:33.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:33.129937 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:33.628685 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:33.628761 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:33.629071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:34.128794 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:34.128880 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:34.129196 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:34.129292 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:34.628955 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:34.629026 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:34.629332 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:35.129092 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:35.129172 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:35.129540 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:35.629393 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:35.629466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:35.629788 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:36.129551 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:36.129629 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:36.129981 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:36.130049 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:36.628714 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:36.628796 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:36.629109 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:37.128919 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:37.128993 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:37.129345 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:37.629059 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:37.629147 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:37.629463 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:38.129234 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:38.129326 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:38.129664 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:38.629351 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:38.629432 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:38.629732 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:38.629805 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:39.129576 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:39.129650 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:39.129997 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:39.628740 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:39.628825 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:39.629123 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.128863 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.128946 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:40.629061 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:40.629132 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:40.629464 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:41.129329 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.129415 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.129770 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:41.129836 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:41.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:41.629638 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:41.629926 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.129712 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.129803 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.130147 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:42.628855 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:42.628932 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:42.629230 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.128970 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.129055 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.129407 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:43.629110 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:43.629193 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:43.629549 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:43.629613 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:44.129360 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.129442 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.129809 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:44.629604 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:44.629695 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:44.629982 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.128765 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.128844 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.129221 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:45.628969 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:45.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:45.629365 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:46.129219 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.129334 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.129701 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:46.129778 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:46.629522 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:46.629594 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:46.629887 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.129668 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.129774 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.130135 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:47.628848 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:47.628924 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:47.629222 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.128974 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.129074 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.129460 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:48.629189 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:48.629275 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:48.629575 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:48.629637 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:49.129365 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.129460 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.129826 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:49.629589 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:49.629663 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:49.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.128684 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.128784 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.129153 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:50.628866 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:50.628940 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:50.629236 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:51.128964 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.129053 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.129443 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:51.129520 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:51.629181 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:51.629285 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:51.629597 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.129363 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.129439 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.129782 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:52.629564 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:52.629637 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:52.629921 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.128676 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.128760 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.129117 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:53.628840 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:53.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:53.629216 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:53.629319 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:54.129011 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.129119 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.129458 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:54.629169 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:54.629255 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:54.629563 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.129370 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.129456 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.129803 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:55.629586 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:55.629656 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:55.629948 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:55.630021 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:56.129716 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.129807 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.130158 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:56.628872 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:56.628960 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:56.629280 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.129030 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.129134 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.129533 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:57.629322 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:57.629394 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:57.629681 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:58.129475 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.129571 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.129969 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 08:59:58.130041 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 08:59:58.629691 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:58.629768 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:58.630065 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.128782 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.128877 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.129234 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 08:59:59.628979 1653676 type.go:168] "Request Body" body=""
	I0804 08:59:59.629051 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 08:59:59.629387 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.129109 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.129205 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.129657 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:00:00.629456 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:00.629529 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:00.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:00:00.629939 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:00:01.129658 1653676 type.go:168] "Request Body" body=""
	I0804 09:00:01.129735 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:00:01.130048 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[the polling cycle shown at 09:00:01.129 repeats unchanged at roughly 500 ms intervals from 09:00:01.628 through 09:01:03.129: each iteration an empty "Request Body", a GET to https://192.168.49.2:8441/api/v1/nodes/functional-699837 with the same Accept and User-Agent headers, and a "Response" with status="" in 0 ms; node_ready.go:55 logs the identical "connection refused" (will retry) warning every 2 to 2.5 seconds throughout, the last at W0804 09:01:03.129706]
	I0804 09:01:03.629451 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:03.629523 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:03.629841 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:04.129677 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:04.129766 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:04.130114 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:04.628842 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:04.628925 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:04.629305 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:05.129074 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:05.129179 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:05.129561 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:05.629356 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:05.629434 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:05.629760 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:05.629824 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:06.129613 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:06.129693 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:06.130038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:06.628772 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:06.628866 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:06.629198 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:07.128967 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:07.129056 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:07.129446 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:07.629172 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:07.629271 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:07.629622 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:08.129431 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:08.129524 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:08.129883 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:08.129948 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:08.629670 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:08.629754 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:08.630071 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:09.128820 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:09.128899 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:09.129287 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:09.629017 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:09.629101 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:09.629445 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:10.129193 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:10.129297 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:10.129649 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:10.629427 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:10.629501 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:10.629814 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:10.629890 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:11.129612 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.129692 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.129995 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:11.628703 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:11.628780 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:11.629047 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.128784 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.128867 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.129223 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:12.628955 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:12.629067 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:12.629416 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:13.129129 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.129206 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.129596 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:13.129670 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:13.629350 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:13.629433 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:13.629735 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.129533 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.129618 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.129952 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:14.628687 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:14.628782 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:14.629096 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.128811 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.128888 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:15.628958 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:15.629043 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:15.629372 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:15.629444 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:16.129169 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.129269 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.129671 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:16.629474 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:16.629546 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:16.629863 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.129648 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.129733 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.130077 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:17.628801 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:17.628873 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:17.629169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:18.128883 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.128963 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.129324 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:18.129398 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:18.629048 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:18.629135 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:18.629454 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.129179 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.129268 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.129621 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:19.629351 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:19.629424 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:19.629708 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:20.129508 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.129585 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.129925 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:20.129994 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:20.628667 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:20.628737 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:20.629038 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.128739 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.128822 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.129169 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:21.628882 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:21.628954 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:21.629266 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.128994 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.129070 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.129426 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:22.629135 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:22.629221 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:22.629538 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:22.629601 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:23.129384 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.129466 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.129808 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:23.629595 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:23.629669 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:23.629984 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.128733 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.128814 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.129170 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:24.629511 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:24.629630 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:24.630004 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:24.630069 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:25.128773 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.128859 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.129232 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:25.629077 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:25.629159 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:25.629492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.129299 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.129377 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.129704 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:26.629492 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:26.629562 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:26.629872 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:27.129668 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.129753 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.130132 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W0804 09:01:27.130203 1653676 node_ready.go:55] error getting node "functional-699837" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-699837": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:01:27.628888 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:27.628961 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:27.629299 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.129030 1653676 type.go:168] "Request Body" body=""
	I0804 09:01:28.129106 1653676 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-699837" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I0804 09:01:28.129492 1653676 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I0804 09:01:28.629210 1653676 node_ready.go:38] duration metric: took 6m0.000644351s for node "functional-699837" to be "Ready" ...
	I0804 09:01:28.630996 1653676 out.go:201] 
	W0804 09:01:28.631963 1653676 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 09:01:28.631975 1653676 out.go:270] * 
	W0804 09:01:28.633557 1653676 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:01:28.634655 1653676 out.go:201] 
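	
	The tail of the log above is minikube's node-Ready wait expiring: the same GET against /api/v1/nodes/functional-699837 is retried every ~500ms, every attempt is refused, and after exactly 6m the surrounding context fires and surfaces as "WaitNodeCondition: context deadline exceeded". A minimal stdlib-only sketch of a loop with that shape (illustrative, not minikube's actual implementation, which goes through client-go round-trippers; URL and timings taken from the log):
	
	package main
	
	import (
		"context"
		"errors"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitNodeReady polls the node URL until it answers 200 or ctx expires.
	func waitNodeReady(ctx context.Context, url string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				// Matches the shape of "WaitNodeCondition: context deadline exceeded".
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-tick.C:
				resp, err := http.Get(url)
				if err != nil {
					continue // e.g. "connect: connection refused" — keep retrying
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // real code would then inspect the node's Ready condition
				}
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-699837")
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println(err)
		}
	}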
	
	
	==> Docker <==
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 systemd[1]: cri-docker.service: Deactivated successfully.
	Aug 04 08:55:25 functional-699837 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:25 functional-699837 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start docker client with request timeout 0s"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Loaded network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 08:55:25 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:25Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 08:55:25 functional-699837 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a670d9d90ef4b3f9c8a2229b07375783d2742e14cb8b08de1d1d609352b31ca9/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6196286ba923f262b934ea01e1a6c54ba05e38908d2ce0251696c08a8b6e4e4f/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87c98d51b11aa2b27ab051d1a1e76c991403967dc4bbed5c8865a1c8839a006c/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:55:26 functional-699837 cri-dockerd[7496]: time="2025-08-04T08:55:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dc39892c792c69f93a9689deb4a22058aa932aaab9b5a2ef60fe93066740a6a/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 08:56:16 functional-699837 dockerd[7186]: time="2025-08-04T08:56:16.274092329Z" level=info msg="ignoring event" container=6a82f093dfdcc77dca8bafe4751718938b424c4cd13715b8c25f8c91d4094c87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:25 functional-699837 dockerd[7186]: time="2025-08-04T08:56:25.952124711Z" level=info msg="ignoring event" container=d11d953e110f7fac9239023c8f301d3ea182fcc19934837d8f119e7d945ae14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:56:55 functional-699837 dockerd[7186]: time="2025-08-04T08:56:55.721506604Z" level=info msg="ignoring event" container=340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:24 functional-699837 dockerd[7186]: time="2025-08-04T08:59:24.457189004Z" level=info msg="ignoring event" container=a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 08:59:32 functional-699837 dockerd[7186]: time="2025-08-04T08:59:32.204638673Z" level=info msg="ignoring event" container=2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fafac7520c8d       9ad783615e1bc       2 minutes ago       Exited              kube-controller-manager   6                   87c98d51b11aa       kube-controller-manager-functional-699837
	a70a68ec61693       d85eea91cc41d       2 minutes ago       Exited              kube-apiserver            6                   6196286ba923f       kube-apiserver-functional-699837
	340fbe431c80a       1e30c0b1e9b99       4 minutes ago       Exited              etcd                      6                   a670d9d90ef4b       etcd-functional-699837
	3206d43d6e58f       21d34a2aeacf5       6 minutes ago       Running             kube-scheduler            2                   4dc39892c792c       kube-scheduler-functional-699837
	0cb03d71b984f       21d34a2aeacf5       6 minutes ago       Exited              kube-scheduler            1                   cdae8372eae9d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:01:41.338737   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:41.339288   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:41.340840   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:41.341300   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:01:41.342796   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
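	
	The five memcache.go lines are kubectl's discovery client retrying the API group list before giving up with the summary error. Note the two vantage points that both fail: this kubectl (run inside the node) dials localhost:8441, while the wait loop earlier dials 192.168.49.2:8441 from the host — both refused, which points at the apiserver process itself being down rather than a tunnel or routing problem. A hypothetical two-endpoint probe, assuming only the addresses from the logs:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// The loopback endpoint kubectl used, and the node IP minikube polls.
		for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err) // both report "connect: connection refused" here
				continue
			}
			conn.Close()
			fmt.Printf("%s: open\n", addr)
		}
	}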
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [340fbe431c80] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
	
	
	
	==> kernel <==
	 09:01:41 up 1 day, 17:43,  0 users,  load average: 0.38, 0.14, 0.36
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [a70a68ec6169] <==
	W0804 08:59:04.426148       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.426280       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 08:59:04.427463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 08:59:04.434192       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 08:59:04.440592       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 08:59:04.440613       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 08:59:04.440846       1 instance.go:232] Using reconciler: lease
	W0804 08:59:04.441668       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:04.441684       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.427410       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:05.441981       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:07.008411       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:07.025679       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:07.166787       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:09.765027       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:09.806488       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:10.063522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:13.932343       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:14.037582       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:14.089064       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:19.259004       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:19.470708       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 08:59:20.945736       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 08:59:24.442401       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
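	
	The Channel/SubChannel warnings are grpc-go's reconnect loop backing off between dials to etcd; after roughly 20 seconds (08:59:04 to 08:59:24) the storage-factory setup hits its own deadline and the apiserver dies on the F-level line — consistent with the container-status table showing kube-apiserver Exited on attempt 6. A rough stdlib sketch of dial-with-backoff-until-deadline (illustrative only; address and timings taken from the logs, backoff constants assumed):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(20 * time.Second)
		backoff := time.Second
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("etcd reachable")
				return
			}
			fmt.Println("dial etcd:", err) // "connect: connection refused" while etcd is down
			time.Sleep(backoff)
			if backoff < 8*time.Second {
				backoff *= 2 // grpc-go applies a similar exponential backoff between attempts
			}
		}
		fmt.Println("error creating storage factory: context deadline exceeded")
	}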
	
	
	==> kube-controller-manager [2fafac7520c8] <==
	I0804 08:59:11.887703       1 serving.go:386] Generated self-signed cert in-memory
	I0804 08:59:12.166874       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 08:59:12.166898       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 08:59:12.168293       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 08:59:12.168315       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 08:59:12.168600       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 08:59:12.168727       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 08:59:32.171192       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [0cb03d71b984] <==
	
	
	==> kube-scheduler [3206d43d6e58] <==
	E0804 09:00:28.563885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:00:32.014424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:00:33.033677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:00:47.281529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:00:47.653383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:00:48.988484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:00:54.836226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:00:54.975251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:00:57.394600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:00:59.500812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:01:00.013055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:01:00.539902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:01:01.692270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:01:02.088398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:01:08.204402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:01:09.352314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:01:11.128294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:01:23.683836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:01:24.236788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:01:31.276535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:01:35.817387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:01:38.102719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:01:38.258043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:01:39.576625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:01:41.440686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Aug 04 09:01:23 functional-699837 kubelet[4226]: E0804 09:01:23.481137    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.396607    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:24 functional-699837 kubelet[4226]: E0804 09:01:24.466107    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.706024    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:01:27 functional-699837 kubelet[4226]: E0804 09:01:27.936556    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:01:28 functional-699837 kubelet[4226]: E0804 09:01:28.598604    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:29 functional-699837 kubelet[4226]: E0804 09:01:29.657833    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:01:30 functional-699837 kubelet[4226]: I0804 09:01:30.482479    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:30 functional-699837 kubelet[4226]: E0804 09:01:30.482883    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.467464    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.599251    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: I0804 09:01:31.599334    4226 scope.go:117] "RemoveContainer" containerID="2fafac7520c8d0e9a9ddb8e73ffb49294146ab4a5f8bce024822ab9f4fdcd5bd"
	Aug 04 09:01:31 functional-699837 kubelet[4226]: E0804 09:01:31.599476    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:01:33 functional-699837 kubelet[4226]: E0804 09:01:33.392410    4226 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-699837&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Aug 04 09:01:34 functional-699837 kubelet[4226]: E0804 09:01:34.397801    4226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588443569dee4d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,LastTimestamp:2025-08-04 08:51:19.611674189 +0000 UTC m=+0.322961923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:01:36 functional-699837 kubelet[4226]: E0804 09:01:36.599152    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:36 functional-699837 kubelet[4226]: I0804 09:01:36.599236    4226 scope.go:117] "RemoveContainer" containerID="340fbe431c80ae67951d4d3de5dbda3a7af1fd7b79b5e3706e0b82c0e360bf2b"
	Aug 04 09:01:36 functional-699837 kubelet[4226]: E0804 09:01:36.599395    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: I0804 09:01:37.484522    4226 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: E0804 09:01:37.484947    4226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: E0804 09:01:37.599579    4226 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: I0804 09:01:37.599670    4226 scope.go:117] "RemoveContainer" containerID="a70a68ec61693decabdce1681f5a849ba6740bf7abf9db4339c54ccb1b99a359"
	Aug 04 09:01:37 functional-699837 kubelet[4226]: E0804 09:01:37.599814    4226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(2b39e4280fdde7528fa65c33493b517b)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="2b39e4280fdde7528fa65c33493b517b"
	Aug 04 09:01:38 functional-699837 kubelet[4226]: E0804 09:01:38.468185    4226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:01:39 functional-699837 kubelet[4226]: E0804 09:01:39.658876    4226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	

-- /stdout --
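Every failure in the log excerpt above is one symptom repeated: each control-plane client dials the apiserver at 192.168.49.2:8441 and is refused, so the scheduler's informers cannot list any resource type and the kubelet cannot register the node. As an illustration only (not part of the test suite), a minimal Go probe of that endpoint would look like this, assuming the address is reachable from wherever it runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 192.168.49.2:8441 is the apiserver endpoint every component above fails to reach.
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 5*time.Second)
        if err != nil {
            // While the apiserver is down, this prints the same "connection refused"
            // seen in the reflector and kubelet logs.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }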
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (266.655945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/MinikubeKubectlCmdDirectly (1.80s)
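For context on the check above: the harness reads apiserver state by rendering a Go text/template (--format={{.APIServer}}) against minikube's status structure. A simplified sketch of that evaluation follows; the Status struct here is a hypothetical stand-in that models only the two field names the harness templates reference, not minikube's actual type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a simplified stand-in for the structure minikube renders its
    // `status --format` templates against; only the two fields used by the
    // harness templates are modeled.
    type Status struct {
        Host      string
        APIServer string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
        // With the control plane down, rendering yields the "Stopped" seen above.
        tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"})
    }

This is also why the harness treats exit status 2 as "may be ok": the container is up (Host renders "Running") even though the apiserver field renders "Stopped".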

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig (742.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-699837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0804 09:04:03.491849 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:05:41.681495 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:07:04.751110 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:09:03.491427 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:10:41.685010 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:774: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-699837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m21.34116639s)

-- stdout --
	* [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Updating the running docker "functional-699837" container ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870906s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.652491519s
	[control-plane-check] kube-scheduler is healthy after 32.64974442s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:776: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-699837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:778: restart took 12m21.342653252s for "functional-699837" cluster.
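The [control-plane-check] lines in the stderr above poll https://192.168.49.2:8441/livez for up to 4m0s before kubeadm gives up. A rough Go sketch of such a polling loop is shown below; it is an approximation of the observable behaviour, not kubeadm's implementation, and it skips TLS verification only because this sketch has no access to the cluster CA:

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Overall deadline mirrors the "This can take up to 4m0s" message.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        client := &http.Client{
            Timeout: 10 * time.Second, // matches the "?timeout=10s" in the final error
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://192.168.49.2:8441/livez")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kube-apiserver is healthy")
                return
            }
            if err == nil {
                resp.Body.Close()
            }
            select {
            case <-ctx.Done():
                // kubeadm reports this case as "kube-apiserver is not healthy after 4m0s".
                fmt.Println("kube-apiserver not healthy before deadline:", ctx.Err())
                return
            case <-time.After(5 * time.Second): // retry interval is illustrative
            }
        }
    }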
I0804 09:14:03.455721 1582690 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
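The inspect output shows the container running with 8441/tcp published to 127.0.0.1:32786, which localizes the connection refusals to the apiserver process inside the container rather than to Docker's port mapping. A small Go sketch that extracts that mapping via the docker CLI (assuming docker is on PATH; the -f template is standard docker inspect formatting):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ports is a map[string][]PortBinding in the inspect JSON, hence the
        // double index before .HostPort.
        out, err := exec.Command("docker", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
            "functional-699837").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // For the container above this prints 32786.
        fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }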
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
E0804 09:14:03.491448 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (264.022972ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-114794 image ls --format yaml --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh     │ functional-114794 ssh pgrep buildkitd                                                                                                               │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ image   │ functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr                                              │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format json --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format table --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls                                                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ delete  │ -p functional-114794                                                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ start   │ -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start   │ -p functional-699837 --alsologtostderr -v=8                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:55 UTC │                     │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.1                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.3                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:latest                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add minikube-local-cache-test:functional-699837                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache delete minikube-local-cache-test:functional-699837                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ list                                                                                                                                                │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl images                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	│ cache   │ functional-699837 cache reload                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                 │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ kubectl │ functional-699837 kubectl -- --context functional-699837 get pods                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	│ start   │ -p functional-699837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:01:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:01:42.156481 1661480 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:01:42.156707 1661480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:01:42.156710 1661480 out.go:358] Setting ErrFile to fd 2...
	I0804 09:01:42.156714 1661480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:01:42.156897 1661480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:01:42.157507 1661480 out.go:352] Setting JSON to false
	I0804 09:01:42.158437 1661480 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150191,"bootTime":1754147911,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:01:42.158562 1661480 start.go:140] virtualization: kvm guest
	I0804 09:01:42.160356 1661480 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:01:42.161427 1661480 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:01:42.161472 1661480 notify.go:220] Checking for updates...
	I0804 09:01:42.163278 1661480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:01:42.164206 1661480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:01:42.165120 1661480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:01:42.165996 1661480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:01:42.166919 1661480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:01:42.168183 1661480 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:01:42.168274 1661480 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:01:42.191254 1661480 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:01:42.191357 1661480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:01:42.241393 1661480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2025-08-04 09:01:42.232515248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:01:42.241500 1661480 docker.go:318] overlay module found
	I0804 09:01:42.242889 1661480 out.go:177] * Using the docker driver based on existing profile
	I0804 09:01:42.244074 1661480 start.go:304] selected driver: docker
	I0804 09:01:42.244080 1661480 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:42.244146 1661480 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:01:42.244220 1661480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:01:42.294650 1661480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2025-08-04 09:01:42.286637693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:01:42.295228 1661480 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 09:01:42.295248 1661480 cni.go:84] Creating CNI manager for ""
	I0804 09:01:42.295307 1661480 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:01:42.295353 1661480 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:42.296893 1661480 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 09:01:42.297909 1661480 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:01:42.298895 1661480 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:01:42.299795 1661480 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:01:42.299827 1661480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 09:01:42.299834 1661480 cache.go:56] Caching tarball of preloaded images
	I0804 09:01:42.299892 1661480 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:01:42.299912 1661480 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 09:01:42.299918 1661480 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 09:01:42.300000 1661480 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 09:01:42.318895 1661480 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:01:42.318906 1661480 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:01:42.318921 1661480 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:01:42.318949 1661480 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:01:42.319013 1661480 start.go:364] duration metric: took 47.797µs to acquireMachinesLock for "functional-699837"
	I0804 09:01:42.319031 1661480 start.go:96] Skipping create...Using existing machine configuration
	I0804 09:01:42.319035 1661480 fix.go:54] fixHost starting: 
	I0804 09:01:42.319241 1661480 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 09:01:42.335260 1661480 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 09:01:42.335277 1661480 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 09:01:42.336775 1661480 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 09:01:42.337763 1661480 machine.go:93] provisionDockerMachine start ...
	I0804 09:01:42.337866 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.354303 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.354606 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.354616 1661480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:01:42.480475 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 09:01:42.480497 1661480 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 09:01:42.480554 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.497934 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.498143 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.498149 1661480 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 09:01:42.631472 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 09:01:42.631543 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.651771 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.651968 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.651979 1661480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:01:42.773172 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:01:42.773193 1661480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:01:42.773212 1661480 ubuntu.go:177] setting up certificates
	I0804 09:01:42.773223 1661480 provision.go:84] configureAuth start
	I0804 09:01:42.773312 1661480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 09:01:42.791415 1661480 provision.go:143] copyHostCerts
	I0804 09:01:42.791465 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:01:42.791472 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:01:42.791531 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:01:42.791616 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:01:42.791620 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:01:42.791646 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:01:42.791714 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:01:42.791716 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:01:42.791734 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:01:42.791789 1661480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
	I0804 09:01:43.143340 1661480 provision.go:177] copyRemoteCerts
	I0804 09:01:43.143389 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:01:43.143445 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.161220 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:43.249861 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:01:43.271347 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 09:01:43.292377 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:01:43.313416 1661480 provision.go:87] duration metric: took 540.180755ms to configureAuth
	I0804 09:01:43.313435 1661480 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:01:43.313593 1661480 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:01:43.313633 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.330273 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.330483 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.330489 1661480 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:01:43.457453 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:01:43.457467 1661480 ubuntu.go:71] root file system type: overlay
	I0804 09:01:43.457576 1661480 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:01:43.457634 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.474934 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.475149 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.475211 1661480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:01:43.609712 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:01:43.609798 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.627690 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.627960 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.627979 1661480 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
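
The command above is the idempotent half of the unit update: the freshly rendered docker.service is only moved into place, and the daemon only reloaded and restarted, when `diff` reports a difference. A minimal standalone sketch of the same pattern (the `install_unit` helper name and the /tmp path are illustrative, not minikube's):

    # Install a systemd unit and restart its service only if the content changed.
    install_unit() {
      local new="$1" dest="$2" svc="$3"
      if ! sudo diff -u "$dest" "$new" >/dev/null 2>&1; then
        sudo mv "$new" "$dest"
        sudo systemctl daemon-reload
        sudo systemctl enable "$svc"
        sudo systemctl restart "$svc"
      fi
    }
    install_unit /tmp/docker.service.new /lib/systemd/system/docker.service docker
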
	I0804 09:01:43.753925 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:01:43.753943 1661480 machine.go:96] duration metric: took 1.416170869s to provisionDockerMachine
	I0804 09:01:43.753958 1661480 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 09:01:43.753972 1661480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:01:43.754026 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:01:43.754070 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.771133 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:43.861861 1661480 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:01:43.864855 1661480 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:01:43.864888 1661480 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:01:43.864895 1661480 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:01:43.864901 1661480 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:01:43.864911 1661480 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:01:43.864956 1661480 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:01:43.865026 1661480 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:01:43.865096 1661480 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 09:01:43.865126 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 09:01:43.872832 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:01:43.894143 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 09:01:43.915287 1661480 start.go:296] duration metric: took 161.311477ms for postStartSetup
	I0804 09:01:43.915357 1661480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:01:43.915392 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.932959 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.018261 1661480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:01:44.022893 1661480 fix.go:56] duration metric: took 1.703852119s for fixHost
	I0804 09:01:44.022909 1661480 start.go:83] releasing machines lock for "functional-699837", held for 1.703889075s
	I0804 09:01:44.022981 1661480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 09:01:44.039826 1661480 ssh_runner.go:195] Run: cat /version.json
	I0804 09:01:44.039861 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:44.039893 1661480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:01:44.039958 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:44.056968 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.057018 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.215860 1661480 ssh_runner.go:195] Run: systemctl --version
	I0804 09:01:44.220163 1661480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:01:44.224284 1661480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:01:44.241133 1661480 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:01:44.241191 1661480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 09:01:44.249056 1661480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 09:01:44.249074 1661480 start.go:495] detecting cgroup driver to use...
	I0804 09:01:44.249111 1661480 detect.go:187] detected "cgroupfs" cgroup driver on host os
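
The detector settled on "cgroupfs" for this host; the same conclusion can be cross-checked by hand. A quick sketch, assuming Docker is installed on the host:

    # What the Docker daemon reports as its cgroup driver:
    docker info --format '{{.CgroupDriver}}'
    # Whether the host runs cgroup v1 (tmpfs) or v2 (cgroup2fs):
    stat -fc %T /sys/fs/cgroup
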
	I0804 09:01:44.249262 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:01:44.263581 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:44.682033 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:01:44.691892 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:01:44.700781 1661480 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:01:44.700830 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:01:44.709728 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:01:44.718687 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:01:44.727121 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:01:44.735358 1661480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:01:44.743204 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:01:44.751683 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:01:44.760146 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 09:01:44.768590 1661480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:01:44.775769 1661480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:01:44.782939 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:44.861305 1661480 ssh_runner.go:195] Run: sudo systemctl restart containerd
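
After this restart, the sed edits above should have pinned containerd to the cgroupfs driver. A quick confirmation, assuming the default config path:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo systemctl is-active containerd                   # expect: active
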
	I0804 09:01:45.079189 1661480 start.go:495] detecting cgroup driver to use...
	I0804 09:01:45.079234 1661480 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:01:45.079293 1661480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:01:45.091099 1661480 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:01:45.091152 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:01:45.102759 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:01:45.118200 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
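
/etc/crictl.yaml (rewritten just above from containerd's socket to cri-dockerd's) tells crictl which CRI endpoint to talk to. A minimal check that the new endpoint answers, assuming crictl is on the PATH:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
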
	I0804 09:01:45.531236 1661480 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:01:45.535092 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:01:45.543037 1661480 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 09:01:45.558759 1661480 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:01:45.636615 1661480 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:01:45.710742 1661480 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:01:45.710843 1661480 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 09:01:45.726627 1661480 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:01:45.735943 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:45.815264 1661480 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:01:46.120565 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:01:46.133038 1661480 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 09:01:46.150796 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:01:46.160527 1661480 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:01:46.221390 1661480 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:01:46.295075 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:46.370922 1661480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:01:46.383433 1661480 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:01:46.393933 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:46.488903 1661480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:01:46.549986 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
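
The sequence above (stop the socket, unmask and enable it, daemon-reload, then restart socket and service) is the usual way to bounce a socket-activated unit without leaving a stale socket behind. Condensed into a sketch:

    sudo systemctl stop cri-docker.socket
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket cri-docker.service
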
	I0804 09:01:46.560540 1661480 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:01:46.560600 1661480 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:01:46.563751 1661480 start.go:563] Will wait 60s for crictl version
	I0804 09:01:46.563795 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:01:46.566758 1661480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:01:46.597980 1661480 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 09:01:46.598027 1661480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:01:46.620697 1661480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:01:46.645762 1661480 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 09:01:46.645842 1661480 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:01:46.662809 1661480 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 09:01:46.668020 1661480 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0804 09:01:46.668935 1661480 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:01:46.669097 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.081840 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.467578 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.872001 1661480 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:01:47.872135 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:48.275938 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:48.676410 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:49.085653 1661480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:01:49.106101 1661480 docker.go:703] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-699837
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0804 09:01:49.106124 1661480 docker.go:633] Images already preloaded, skipping extraction
	I0804 09:01:49.106192 1661480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:01:49.124259 1661480 docker.go:703] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-699837
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0804 09:01:49.124275 1661480 cache_images.go:85] Images are preloaded, skipping loading
	I0804 09:01:49.124286 1661480 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 09:01:49.124427 1661480 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 09:01:49.124491 1661480 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:01:49.170617 1661480 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
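
The NamespaceAutoProvision value originates from the test's extra config; note that a user-supplied enable-admission-plugins list replaces the default plugin set rather than extending it. For reference, the minikube flag form this corresponds to:

    minikube start -p functional-699837 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
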
	I0804 09:01:49.170646 1661480 cni.go:84] Creating CNI manager for ""
	I0804 09:01:49.170660 1661480 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:01:49.170668 1661480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 09:01:49.170688 1661480 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:01:49.170805 1661480 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
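
This rendered config is scp'd to /var/tmp/minikube/kubeadm.yaml.new below and later consumed by kubeadm. A sketch of checking such a file by hand, assuming the kubeadm binary staged under /var/lib/minikube/binaries and a kubeadm release recent enough to ship `config validate`:

    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
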
	
	I0804 09:01:49.170853 1661480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:01:49.178893 1661480 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 09:01:49.178936 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:01:49.186387 1661480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 09:01:49.201786 1661480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 09:01:49.217510 1661480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0804 09:01:49.233089 1661480 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:01:49.236403 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:49.323526 1661480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:01:49.333766 1661480 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 09:01:49.333778 1661480 certs.go:194] generating shared ca certs ...
	I0804 09:01:49.333793 1661480 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:01:49.333937 1661480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:01:49.333980 1661480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:01:49.333986 1661480 certs.go:256] generating profile certs ...
	I0804 09:01:49.334070 1661480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 09:01:49.334108 1661480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 09:01:49.334140 1661480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 09:01:49.334230 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:01:49.334251 1661480 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:01:49.334257 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:01:49.334275 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:01:49.334296 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:01:49.334317 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:01:49.334351 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:01:49.334909 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:01:49.355952 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:01:49.376603 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:01:49.397019 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:01:49.417530 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 09:01:49.437950 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 09:01:49.457994 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:01:49.478390 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 09:01:49.498988 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:01:49.519691 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:01:49.540289 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:01:49.560954 1661480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 09:01:49.576254 1661480 ssh_runner.go:195] Run: openssl version
	I0804 09:01:49.581261 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:01:49.589514 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.592478 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.592512 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.598570 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 09:01:49.606091 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:01:49.613958 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.616884 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.616913 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.622974 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 09:01:49.630466 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:01:49.638717 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.641763 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.641800 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.648809 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
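
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem here), and trust is established by symlinking `<hash>.0` in /etc/ssl/certs to the certificate. A sketch of the same pattern, assuming openssl is on PATH:

```go
// ca_symlink.go: a sketch of the hashed-symlink convention used above.
// OpenSSL resolves trust anchors by looking up /etc/ssl/certs/<hash>.0,
// where <hash> is the output of `openssl x509 -hash -noout -in <cert>`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// Ask openssl for the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Mirror `ln -fs`: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
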
	I0804 09:01:49.656437 1661480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:01:49.659644 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 09:01:49.665529 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 09:01:49.671334 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 09:01:49.677030 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 09:01:49.682628 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 09:01:49.688419 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
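
Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a nonzero exit would force regeneration. The same check can be done natively, as in this sketch:

```go
// checkend.go: a sketch of the 24-hour expiry check performed above with
// `openssl x509 -checkend 86400`, done natively with crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window (86400s == 24h in the log above).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		os.Exit(1) // like openssl -checkend: nonzero means "will expire"
	}
}
```
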
	I0804 09:01:49.694068 1661480 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:49.694169 1661480 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:01:49.711391 1661480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:01:49.719062 1661480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 09:01:49.719070 1661480 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 09:01:49.719111 1661480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 09:01:49.726688 1661480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:49.727133 1661480 kubeconfig.go:125] found "functional-699837" server: "https://192.168.49.2:8441"
	I0804 09:01:49.728393 1661480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 09:01:49.735849 1661480 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-08-04 08:47:09.659345836 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-08-04 09:01:49.228640689 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
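
Drift detection above rides on diff's exit code: `diff -u` returns 0 for identical files and 1 when they differ, with the unified diff on stdout. A sketch of that decision:

```go
// config_drift.go: a sketch of the drift check above -- `diff -u` exits 0
// when the files match and 1 when they differ, so the exit code alone
// decides whether the cluster needs reconfiguring.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func kubeadmConfigDrift(current, next string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, next).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: exit status 0
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // differ: exit status 1, diff on stdout
	}
	return false, "", err // status 2 or worse: missing file, bad flag, ...
}

func main() {
	drift, diff, err := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if drift {
		fmt.Printf("detected kubeadm config drift (will reconfigure cluster):\n%s", diff)
	}
}
```
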
	I0804 09:01:49.735860 1661480 kubeadm.go:1152] stopping kube-system containers ...
	I0804 09:01:49.735896 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:01:49.755611 1661480 docker.go:496] Stopping containers: [54bef897d3ad 5e988e8b274a 16527e0d8c26 14c7dc479dba 243f1d3d8950 2fafac7520c8 a70a68ec6169 340fbe431c80 3206d43d6e58 6196286ba923 87c98d51b11a 4dc39892c792 a670d9d90ef4 0cb03d71b984 cdae8372eae9]
	I0804 09:01:49.755668 1661480 ssh_runner.go:195] Run: docker stop 54bef897d3ad 5e988e8b274a 16527e0d8c26 14c7dc479dba 243f1d3d8950 2fafac7520c8 a70a68ec6169 340fbe431c80 3206d43d6e58 6196286ba923 87c98d51b11a 4dc39892c792 a670d9d90ef4 0cb03d71b984 cdae8372eae9
	I0804 09:01:49.833087 1661480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
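
With drift confirmed, the restart path freezes the old control plane: containers are discovered through Docker's `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>` naming convention, stopped in one batch, and then the kubelet itself is stopped so it cannot restart the old static pods. Roughly:

```go
// stop_control_plane.go: a sketch of the freeze step above -- stop every
// kubelet-created kube-system container in one batch, then the kubelet.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// kubelet names Docker containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>,
	// so a regex name filter is enough to find the kube-system set.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_",
		"--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		fmt.Printf("Stopping containers: %v\n", ids)
		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	// Finally take down the kubelet so nothing respawns the stopped pods.
	if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
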
	I0804 09:01:49.988574 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:01:49.996961 1661480 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Aug  4 08:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Aug  4 08:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Aug  4 08:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Aug  4 08:51 /etc/kubernetes/scheduler.conf
	
	I0804 09:01:49.996998 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:01:50.004698 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:01:50.012067 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.012114 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:01:50.019467 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:01:50.027050 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.027082 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:01:50.034408 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:01:50.041768 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.041795 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
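
The grep/rm sequence above prunes stale kubeconfigs: any of the four /etc/kubernetes/*.conf files that no longer references https://control-plane.minikube.internal:8441 is deleted so the kubeconfig phase below regenerates it (here admin.conf passed the check; the other three were removed). In outline:

```go
// prune_confs.go: a sketch of the stale-kubeconfig pruning above. A conf
// file that does not mention the expected control-plane endpoint is
// removed so kubeadm can regenerate it in the kubeconfig phase.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pruneStaleConfs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			continue // still points at the right endpoint; keep it
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
		if err := os.Remove(p); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := pruneStaleConfs("https://control-plane.minikube.internal:8441", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
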
	I0804 09:01:50.049038 1661480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:01:50.056613 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:50.095874 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.185164 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.089256416s)
	I0804 09:01:52.185190 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.321482 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.369615 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
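
Instead of a full `kubeadm init`, the restart replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order, exactly as logged above. A sketch of the sequence:

```go
// kubeadm_phases.go: a sketch of the phased restart above -- each
// `kubeadm init phase ...` is replayed against the regenerated config
// instead of running a full `kubeadm init`.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func replayInitPhases(kubeadmPath, configPath string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", configPath)
		cmd := exec.Command(kubeadmPath, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := replayInitPhases(
		"/var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm",
		"/var/tmp/minikube/kubeadm.yaml",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
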
	I0804 09:01:52.486402 1661480 api_server.go:52] waiting for apiserver process to appear ...
	I0804 09:01:52.486480 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:52.986660 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:53.487520 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:53.499325 1661480 api_server.go:72] duration metric: took 1.012937004s to wait for apiserver process to appear ...
	I0804 09:01:53.499341 1661480 api_server.go:88] waiting for apiserver healthz status ...
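
Everything that follows is one poll loop: GET https://192.168.49.2:8441/healthz with a short per-request timeout, retried until the overall budget runs out. The 5-second gaps before each "context deadline exceeded" and the roughly 500ms cadence of the "connection refused" retries both fall out of a shape like this sketch (timings inferred from the log, not taken from minikube's source):

```go
// healthz_wait.go: a sketch of the poll loop driving the rest of this log:
// hit /healthz with a short per-request timeout, retry on any failure.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request cap: "context deadline exceeded" after ~5s
		Transport: &http.Transport{
			// The probe doesn't pin the apiserver's serving cert; skip
			// verification for the health check only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		} else {
			fmt.Printf("stopped: %s: %v\n", url, err)
		}
		time.Sleep(500 * time.Millisecond) // retry cadence seen in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
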
	I0804 09:01:53.499366 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:01:58.500087 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:01:58.500130 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:03.500427 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:03.500461 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:08.502025 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:08.502061 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:13.503279 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:13.503317 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:14.779567 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": read tcp 192.168.49.1:33220->192.168.49.2:8441: read: connection reset by peer
	I0804 09:02:14.779627 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:14.780024 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.000448 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:15.000951 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.499579 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:15.499998 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.999661 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:21.000340 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:21.000373 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:26.001332 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:26.001368 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:31.002000 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:31.002033 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.005328 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:36.005357 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.551344 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": read tcp 192.168.49.1:35998->192.168.49.2:8441: read: connection reset by peer
	I0804 09:02:36.551397 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.551841 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:36.999411 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.999848 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:37.500408 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:37.500946 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:37.999558 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:37.999957 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:38.499584 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:38.500029 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:38.999644 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:39.000099 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:39.499738 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:39.500213 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:39.999937 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:40.000357 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:40.500064 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:40.500521 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:40.999940 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:41.000330 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:41.500057 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:41.500511 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:42.000224 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:42.000633 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:42.500342 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:42.500765 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.000455 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.000936 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.499548 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.499961 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.999579 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.999966 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:44.499598 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:44.500010 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:44.999630 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:45.000087 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:45.499708 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:45.500143 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:45.999756 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:46.000186 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:46.499807 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:46.500248 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:46.999865 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:47.000330 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:47.500068 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:47.500472 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:48.000163 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:48.000618 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:48.500337 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:48.500730 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.000434 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.000869 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.499503 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.499937 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.999501 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.999940 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:50.499602 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:50.500057 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:50.999688 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:51.000139 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:51.499774 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:51.500227 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:51.999865 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:52.000295 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:52.500025 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:52.500526 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:53.000242 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:53.000634 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:53.500441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:02:53.519729 1661480 logs.go:282] 2 containers: [535dc83f2f73 a70a68ec6169]
	I0804 09:02:53.519801 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:02:53.538762 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:02:53.538813 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:02:53.556054 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.556070 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:02:53.556116 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:02:53.573504 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:02:53.573556 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:02:53.590727 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.590742 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:02:53.590784 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:02:53.608494 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:02:53.608550 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:02:53.625413 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.625424 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:02:53.625435 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:02:53.625443 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:02:53.665235 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:02:53.665279 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:02:53.683621 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:02:53.683636 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:02:53.708748 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:02:53.708766 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:02:53.729347 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:02:53.729362 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:02:53.770407 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:02:53.770428 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:02:53.852664 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:02:53.852687 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:02:53.907229 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:02:53.900372   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.900835   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902406   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902856   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.904351   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:02:53.900372   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.900835   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902406   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902856   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.904351   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:02:53.907253 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:02:53.907266 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:02:53.932272 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:02:53.932289 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:02:53.966223 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:02:53.966245 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:02:54.018841 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:02:54.018859 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
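
When the healthz wait stalls, a diagnostics pass is interleaved, as above and repeatedly below: container IDs are discovered per component with a `docker ps -a` name filter, then each container's last 400 log lines are tailed alongside the kubelet and Docker journals. A sketch of the container half:

```go
// gather_logs.go: a sketch of the diagnostics pass repeated above --
// find each component's container(s) by name filter, then tail its logs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
	} {
		ids, err := containersFor(component)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			fmt.Printf("==> %s [%s] <==\n", component, id)
			cmd := exec.Command("docker", "logs", "--tail", "400", id)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			_ = cmd.Run() // best effort: diagnostics shouldn't abort the wait
		}
	}
}
```
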
	I0804 09:02:56.541137 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:56.541605 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:56.541686 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:02:56.560651 1661480 logs.go:282] 2 containers: [535dc83f2f73 a70a68ec6169]
	I0804 09:02:56.560710 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:02:56.578753 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:02:56.578815 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:02:56.596005 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.596019 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:02:56.596059 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:02:56.613187 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:02:56.613269 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:02:56.629991 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.630005 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:02:56.630051 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:02:56.647935 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:02:56.648000 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:02:56.665663 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.665677 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:02:56.665686 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:02:56.665696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:02:56.703183 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:02:56.703200 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:02:56.757823 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:02:56.750851   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.751407   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.752950   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.753405   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.754929   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:02:56.750851   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.751407   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.752950   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.753405   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.754929   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:02:56.757834 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:02:56.757846 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:02:56.793009 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:02:56.793031 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:02:56.814543 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:02:56.814560 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:02:56.858353 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:02:56.858374 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:02:56.938490 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:02:56.938512 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:02:56.957429 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:02:56.957445 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:02:56.982565 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:02:56.982582 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:02:57.007749 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:02:57.007767 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:02:57.027909 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:02:57.027926 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:02:59.582075 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:04.583858 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:03:04.583974 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:04.603429 1661480 logs.go:282] 3 containers: [a20e277f239a 535dc83f2f73 a70a68ec6169]
	I0804 09:03:04.603486 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:04.621192 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:04.621271 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:04.638764 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.638780 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:04.638831 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:04.656957 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:04.657045 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:04.673865 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.673881 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:04.673937 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:04.691557 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:03:04.691645 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:04.709384 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.709397 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:04.709412 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:04.709425 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:04.728509 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:03:04.728525 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:03:04.753446 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:04.753464 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:04.772841 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:04.772865 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:03:19.398944 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.626059536s)
	W0804 09:03:19.398974 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:14.821564   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:03:19.391583   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:42134->[::1]:8441: read: connection reset by peer"
	E0804 09:03:19.392195   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.393996   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.394458   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:14.821564   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:03:19.391583   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:42134->[::1]:8441: read: connection reset by peer"
	E0804 09:03:19.392195   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.393996   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.394458   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:19.398986 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:19.398996 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:19.427211 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:19.427230 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:19.452181 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:19.452199 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:19.488740 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:19.488758 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:19.543335 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:19.543361 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:19.564213 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:19.564229 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:19.604899 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:19.604921 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:19.642424 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:19.642448 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:22.221477 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:22.222040 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:22.222143 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:22.241050 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:22.241115 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:22.258165 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:22.258242 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:22.276561 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.276574 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:22.276617 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:22.295029 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:22.295092 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:22.312122 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.312132 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:22.312182 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:22.329412 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:22.329488 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:22.346310 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.346323 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:22.346333 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:22.346343 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:22.367806 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:22.367821 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:22.445841 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:22.445861 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:22.471474 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:22.471489 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:22.496759 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:22.496775 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:22.517309 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:22.517327 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:22.557714 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:22.557732 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:22.593146 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:22.593170 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:22.611504 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:22.611518 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:22.665839 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:22.658662   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.659228   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.660791   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.661206   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.662674   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:22.658662   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.659228   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.660791   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.661206   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.662674   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:22.665851 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:22.665861 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:22.702988 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:22.703006 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:22.755945 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:22.755968 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:25.277601 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:25.278136 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:25.278248 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:25.297160 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:25.297216 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:25.316643 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:25.316709 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:25.334387 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.334404 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:25.334454 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:25.351774 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:25.351842 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:25.369473 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.369485 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:25.369530 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:25.387080 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:25.387143 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:25.404296 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.404309 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:25.404318 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:25.404329 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:25.422982 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:25.422997 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:25.476224 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:25.468440   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.468969   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.470557   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.471704   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.472278   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:25.468440   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.468969   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.470557   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.471704   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.472278   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:25.476235 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:25.476245 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:25.501952 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:25.501972 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:25.522116 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:25.522135 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:25.559523 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:25.559539 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:25.611041 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:25.611060 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:25.631550 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:25.631569 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:25.652151 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:25.652168 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:25.726816 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:25.726837 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:25.752766 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:25.752786 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:25.796279 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:25.796296 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:28.337315 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:28.337785 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:28.337864 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:28.356559 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:28.356610 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:28.374336 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:28.374386 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:28.391793 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.391806 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:28.391847 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:28.410341 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:28.410399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:28.427793 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.427809 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:28.427859 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:28.444847 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:28.444924 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:28.462592 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.462609 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:28.462619 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:28.462631 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:28.482600 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:28.482615 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:28.507602 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:28.507619 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:28.526984 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:28.526998 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:28.577894 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:28.577914 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:28.597919 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:28.597936 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:28.617782 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:28.617797 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:28.660530 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:28.660549 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:28.698114 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:28.698131 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:28.771090 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:28.771114 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:28.825345 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:28.818550   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.819081   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.820612   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.821003   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.822518   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:28.818550   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.819081   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.820612   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.821003   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.822518   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:28.825358 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:28.825372 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:28.851539 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:28.851559 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:31.390425 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:31.390852 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:31.390931 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:31.410612 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:31.410681 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:31.428091 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:31.428165 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:31.446602 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.446621 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:31.446675 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:31.464168 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:31.464223 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:31.481049 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.481063 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:31.481115 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:31.497227 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:31.497311 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:31.513575 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.513586 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:31.513594 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:31.513604 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:31.567139 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:31.558828   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.559407   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.561385   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.562296   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.563788   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:31.558828   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.559407   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.561385   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.562296   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.563788   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:31.567151 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:31.567162 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:31.591977 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:31.591994 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:31.644763 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:31.644783 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:31.664981 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:31.664997 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:31.708596 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:31.708616 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:31.734001 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:31.734019 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:31.753980 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:31.754000 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:31.789591 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:31.789609 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:31.825063 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:31.825082 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:31.904005 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:31.904027 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:34.424932 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:34.425333 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:34.425419 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:34.444542 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:34.444596 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:34.461912 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:34.461985 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:34.479889 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.479903 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:34.479953 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:34.497552 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:34.497604 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:34.515003 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.515014 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:34.515053 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:34.532842 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:34.532909 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:34.549350 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.549362 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:34.549371 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:34.549381 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:34.567689 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:34.567704 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:34.605688 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:34.605703 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:34.625847 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:34.625861 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:34.668000 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:34.668021 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:34.742105 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:34.742129 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:34.797022 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:34.790082   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.790655   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792223   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792752   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.794335   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:34.790082   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.790655   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792223   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792752   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.794335   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:34.797034 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:34.797047 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:34.822397 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:34.822417 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:34.849317 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:34.849334 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:34.869225 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:34.869259 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:34.923527 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:34.923548 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:37.459936 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:37.460377 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:37.460466 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:37.479380 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:37.479441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:37.497080 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:37.497149 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:37.514761 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.514778 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:37.514824 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:37.532588 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:37.532656 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:37.550208 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.550224 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:37.550275 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:37.568463 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:37.568527 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:37.585787 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.585800 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:37.585809 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:37.585821 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:37.659045 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:37.659073 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:37.685717 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:37.685735 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:37.704291 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:37.704307 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:37.741922 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:37.741943 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:37.793694 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:37.793713 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:37.813368 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:37.813385 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:37.848883 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:37.848900 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:37.867491 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:37.867505 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:37.921199 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:37.913356   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.913927   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916144   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916563   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.918058   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:37.913356   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.913927   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916144   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916563   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.918058   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:37.921219 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:37.921231 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:37.947342 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:37.947359 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:40.489125 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:40.489554 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:40.489630 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:40.508607 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:40.508669 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:40.528138 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:40.528187 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:40.545305 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.545318 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:40.545357 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:40.562122 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:40.562191 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:40.579129 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.579144 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:40.579191 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:40.597048 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:40.597124 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:40.614353 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.614368 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:40.614378 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:40.614390 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:40.634206 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:40.634222 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:40.653989 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:40.654006 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:40.672246 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:40.672260 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:40.726229 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:40.719031   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.719524   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721096   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721545   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.723074   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:40.719031   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.719524   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721096   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721545   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.723074   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:40.726242 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:40.726257 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:40.766179 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:40.766200 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:40.821048 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:40.821069 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:40.864128 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:40.864147 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:40.900068 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:40.900085 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:40.973288 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:40.973310 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:41.000020 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:41.000039 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:43.525994 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:43.526421 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:43.526503 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:43.545290 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:43.545349 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:43.562985 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:43.563038 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:43.579516 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.579532 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:43.579582 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:43.597186 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:43.597261 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:43.613554 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.613568 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:43.613609 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:43.631061 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:43.631120 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:43.649100 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.649114 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:43.649125 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:43.649144 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:43.667561 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:43.667577 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:43.721973 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:43.714008   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.714530   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717089   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717552   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.719095   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:43.714008   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.714530   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717089   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717552   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.719095   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:43.721984 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:43.721995 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:43.742540 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:43.742556 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:43.780241 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:43.780259 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:43.834318 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:43.834339 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:43.869987 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:43.870005 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:43.946032 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:43.946053 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:43.973679 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:43.973697 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:43.998917 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:43.998935 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:44.019361 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:44.019378 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:46.564446 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:46.564898 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:46.564992 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:46.584902 1661480 logs.go:282] 3 containers: [20f5be32354b a20e277f239a a70a68ec6169]
	I0804 09:03:46.585028 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:46.610427 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:46.610492 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:46.627832 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.627848 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:46.627896 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:46.662895 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:46.662956 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:46.679864 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.679882 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:46.679929 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:46.697936 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:46.697999 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:46.716993 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.717008 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:46.717020 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:46.717029 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:46.790622 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:46.790643 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:46.809548 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:46.809566 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:04:08.045069 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.235482683s)
	W0804 09:04:08.045100 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:56.860697   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:06.861827   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:08.039221   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:51136->[::1]:8441: read: connection reset by peer"
	E0804 09:04:08.039948   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:08.041660   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:56.860697   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:06.861827   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:08.039221   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:51136->[::1]:8441: read: connection reset by peer"
	E0804 09:04:08.039948   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:08.041660   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:08.045109 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:08.045120 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:08.071094 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:04:08.071112 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	W0804 09:04:08.089428 1661480 logs.go:130] failed kube-apiserver [a20e277f239a]: command: /bin/bash -c "docker logs --tail 400 a20e277f239a" /bin/bash -c "docker logs --tail 400 a20e277f239a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: a20e277f239a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: a20e277f239a
	
	** /stderr **
	I0804 09:04:08.089437 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:08.089448 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:08.129150 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:08.129169 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:08.185332 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:08.185356 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:08.207810 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:08.207830 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:08.233521 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:08.233539 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:08.253969 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:08.253985 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:08.299455 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:08.299476 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:10.840062 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:10.840666 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:10.840762 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:10.860521 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:10.860576 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:10.877749 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:10.877804 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:10.894797 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.894809 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:10.894851 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:10.911920 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:10.911993 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:10.929397 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.929412 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:10.929461 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:10.947092 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:04:10.947149 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:10.964066 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.964083 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:10.964095 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:10.964107 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:10.983914 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:10.983930 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:11.020490 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:11.020510 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:11.039187 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:11.039203 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:11.095001 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:11.087446   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.087938   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089522   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089962   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.091585   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
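All of the refusals above trace to the same symptom: both the healthz probe against 192.168.49.2:8441 and kubectl's localhost:8441 calls find nothing listening. A minimal sketch of reproducing the probe by hand (assuming curl is available where the test runs; the address and port are the ones recorded in the log):

    # Probe the apiserver endpoint the health check targets.
    # -k skips TLS verification (the apiserver uses a cluster-internal CA);
    # --connect-timeout bounds the wait when nothing is listening.
    curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz
    # A healthy apiserver answers "ok"; here the TCP connect itself is
    # refused, matching the "connection refused" lines above.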
	I0804 09:04:11.095012 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:11.095022 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:11.120789 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:11.120807 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:11.146008 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:11.146024 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:11.166112 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:11.166128 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:11.204792 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:11.204810 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:11.249456 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:11.249479 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:11.325884 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:11.325911 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
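Each gathering pass above repeats the same two-step pattern: list candidate containers with a docker name filter, then tail each match. Run by hand, with the filter and container ID taken from the lines above, that is:

    # Step 1: find container IDs whose name matches the k8s_ prefix minikube uses.
    docker ps -a --filter=name=k8s_etcd --format='{{.ID}}'
    # Step 2: tail the last 400 lines of a matched container (ID from the log).
    docker logs --tail 400 e4c966ab8463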
	I0804 09:04:13.884709 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:13.885223 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:13.885353 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:13.904359 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:13.904417 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:13.922238 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:13.922302 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:13.939358 1661480 logs.go:282] 0 containers: []
	W0804 09:04:13.939372 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:13.939426 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:13.956853 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:13.956910 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:13.974857 1661480 logs.go:282] 0 containers: []
	W0804 09:04:13.974869 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:13.974908 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:13.992568 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:04:13.992628 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:14.009924 1661480 logs.go:282] 0 containers: []
	W0804 09:04:14.009937 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:14.009947 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:14.009962 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:14.061962 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:14.061980 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:14.105751 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:14.105768 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:14.159867 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:14.152559   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.153066   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154592   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154981   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.156381   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:14.159880 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:14.159892 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:14.180879 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:14.180897 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:14.223204 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:14.223223 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:14.244081 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:14.244097 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:14.279867 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:14.279884 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:14.357345 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:14.357368 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:14.375771 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:14.375787 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:14.401599 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:14.401615 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:16.929311 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:16.929726 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:16.929806 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:16.949884 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:16.949946 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:16.966827 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:16.966875 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:16.984179 1661480 logs.go:282] 0 containers: []
	W0804 09:04:16.984194 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:16.984241 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:17.001543 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:17.001596 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:17.018974 1661480 logs.go:282] 0 containers: []
	W0804 09:04:17.018985 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:17.019032 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:17.037024 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:17.037087 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:17.067627 1661480 logs.go:282] 0 containers: []
	W0804 09:04:17.067640 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:17.067650 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:17.067662 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:17.089231 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:17.089266 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:17.145083 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:17.137004   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.137530   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139081   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139547   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.141048   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:17.145095 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:17.145107 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:17.183037 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:17.183057 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:17.224495 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:17.224513 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:17.277939 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:17.277961 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:17.299213 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:17.299229 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:17.343379 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:17.343397 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:17.368834 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:17.368850 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:17.388736 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:17.388752 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:17.408859 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:17.408875 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:17.445491 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:17.445507 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:20.023254 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:20.023726 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:20.023805 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:20.042775 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:20.042834 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:20.060600 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:20.060658 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:20.078019 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.078036 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:20.078074 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:20.096002 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:20.096071 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:20.112684 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.112698 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:20.112741 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:20.130951 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:20.131021 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:20.147664 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.147675 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:20.147685 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:20.147696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:20.166143 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:20.166161 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:20.221888 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:20.214386   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.214988   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216543   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216938   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.218460   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:20.221899 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:20.221912 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:20.247606 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:20.247623 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:20.269435 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:20.269454 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:20.322915 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:20.322934 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:20.344869 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:20.344885 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:20.388193 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:20.388210 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:20.424170 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:20.424187 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:20.496074 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:20.496094 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:20.522349 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:20.522368 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:20.563687 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:20.563710 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:23.085074 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:23.085599 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:23.085689 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:23.104776 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:23.104833 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:23.122616 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:23.122682 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:23.140381 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.140396 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:23.140449 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:23.158043 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:23.158105 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:23.175945 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.175960 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:23.176004 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:23.193909 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:23.193981 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:23.211258 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.211272 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:23.211282 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:23.211292 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:23.236427 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:23.236445 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:23.275922 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:23.275944 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:23.296315 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:23.296332 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:23.317009 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:23.317026 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:23.357932 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:23.357953 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:23.394105 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:23.394122 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:23.467404 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:23.467423 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:23.494717 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:23.494734 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:23.515040 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:23.515055 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:23.566202 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:23.566221 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:23.586603 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:23.586621 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:23.640949 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:23.633581   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.634121   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.635682   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.636105   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.637658   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
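The "container status" step in each pass uses a shell fallback: prefer crictl when it is installed, otherwise fall back to docker. Expanded without the backtick substitution, the command run above is equivalent to this sketch:

    # `which crictl` prints the binary's path when installed; `echo crictl`
    # supplies the bare name (which then fails to run) so the || branch
    # falls through to docker.
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a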
	I0804 09:04:26.142544 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:26.143011 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:26.143111 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:26.163238 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:26.163305 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:26.181526 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:26.181598 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:26.198994 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.199008 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:26.199055 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:26.216773 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:26.216843 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:26.234131 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.234150 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:26.234204 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:26.251698 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:26.251757 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:26.269113 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.269125 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:26.269136 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:26.269147 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:26.309761 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:26.309780 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:26.362115 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:26.362133 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:26.382406 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:26.382421 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:26.427317 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:26.427338 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:26.445864 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:26.445879 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:26.470826 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:26.470845 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:26.490799 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:26.490814 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:26.526252 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:26.526276 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:26.599966 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:26.599993 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:26.655307 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:26.648488   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.649034   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.650536   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.650909   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.652405   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:26.655322 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:26.655332 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:26.680910 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:26.680927 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:29.201316 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:29.201803 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:29.201888 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:29.220916 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:29.220981 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:29.240273 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:29.240334 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:29.258749 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.258769 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:29.258820 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:29.276728 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:29.276789 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:29.294103 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.294118 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:29.294162 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:29.312051 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:29.312121 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:29.329450 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.329463 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:29.329472 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:29.329482 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:29.406478 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:29.406501 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:29.449867 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:29.449885 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:29.505732 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:29.505753 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:29.527260 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:29.527278 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:29.568876 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:29.568900 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:29.588395 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:29.588411 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:29.642645 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:29.635519   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.636038   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.637658   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.638071   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.639537   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:29.642654 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:29.642665 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:29.668637 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:29.668654 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:29.693869 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:29.693888 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:29.714488 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:29.714503 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:32.250740 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:32.251210 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:32.251290 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:32.270825 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:32.270884 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:32.288747 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:32.288802 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:32.306493 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.306505 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:32.306552 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:32.323960 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:32.324014 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:32.341171 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.341187 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:32.341230 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:32.358803 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:32.358860 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:32.375636 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.375647 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:32.375657 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:32.375670 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:32.395884 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:32.395899 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:32.438480 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:32.438499 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:32.482900 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:32.482918 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:32.518645 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:32.518662 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:32.591929 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:32.591950 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:32.644879 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:32.644899 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:32.665398 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:32.665413 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:32.684813 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:32.684830 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:32.738309 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:32.731481   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.731997   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.733547   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.733950   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.735467   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:32.738320 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:32.738331 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:32.763969 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:32.763987 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:35.291352 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:35.291810 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:35.291895 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:35.311568 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:35.311636 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:35.329568 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:35.329650 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:35.347266 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.347276 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:35.347315 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:35.364992 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:35.365054 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:35.381643 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.381657 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:35.381696 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:35.398762 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:35.398830 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:35.415553 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.415568 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:35.415579 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:35.415590 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:35.434052 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:35.434066 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:35.488645 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:35.481621   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.482093   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.483610   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.483982   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.485495   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:35.488656 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:35.488666 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:35.532366 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:35.532384 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:35.552538 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:35.552555 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:35.588052 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:35.588072 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:35.666164 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:35.666184 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:35.693682 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:35.693700 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:35.718989 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:35.719004 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:35.739132 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:35.739149 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:35.792779 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:35.792799 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
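
The lines above complete one pass of minikube's apiserver wait loop: probe https://192.168.49.2:8441/healthz, and when the dial is refused, enumerate the control-plane containers and dump their logs before retrying a few seconds later. Below is a minimal sketch of the probe side of that loop, under stated assumptions: probeHealthz is an illustrative name rather than minikube's own function, and the probe skips certificate verification because only reachability matters for this check.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz is an illustrative stand-in for the health check in the
    // log (api_server.go): GET https://<node-ip>:8441/healthz with a short
    // timeout, ignoring the cluster CA since only reachability matters.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused" while the apiserver is down
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if err := probeHealthz("https://192.168.49.2:8441/healthz"); err != nil {
                fmt.Println("stopped:", err) // the log's api_server.go:269 line
                time.Sleep(3 * time.Second)  // matches the ~3s cadence above
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("gave up waiting for apiserver")
    }

In this run the probe never succeeds: every iteration below ends in "connection refused" and another round of log gathering.
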
	I0804 09:04:38.337951 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:38.338399 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:38.338478 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:38.357165 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:38.357226 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:38.374097 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:38.374155 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:38.391382 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.391396 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:38.391442 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:38.408993 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:38.409051 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:38.426050 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.426065 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:38.426108 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:38.443913 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:38.443969 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:38.460846 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.460858 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:38.460868 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:38.460883 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:38.538741 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:38.538763 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:38.557324 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:38.557344 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:38.611322 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:38.604134   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.604668   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606185   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606583   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.607975   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:38.604134   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.604668   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606185   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606583   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.607975   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:38.611333 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:38.611344 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:38.651785 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:38.651803 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:38.704282 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:38.704300 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:38.748296 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:38.748316 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:38.788934 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:38.788954 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:38.813911 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:38.813928 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:38.838936 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:38.838953 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:38.858717 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:38.858736 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
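
Each iteration re-counts the control-plane containers with docker ps -a --filter name=k8s_<component> --format {{.ID}}. The k8s_ prefix comes from the container naming scheme cri-dockerd uses (k8s_<container>_<pod>_<namespace>_...), so a name filter finds a component's containers even after they have exited. A sketch of that enumeration, with listComponentContainers as a hypothetical helper name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listComponentContainers mirrors the enumeration step in the log:
    // `docker ps -a --filter name=k8s_<component> --format {{.ID}}`.
    // Illustrative helper, not minikube's own API.
    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := listComponentContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

The zero counts for coredns, kube-proxy, and kindnet are consistent with a cluster that never got past starting the apiserver: those pods are only created once the control plane is serving.
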
	I0804 09:04:41.379671 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:41.380124 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:41.380209 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:41.398983 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:41.399040 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:41.417150 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:41.417203 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:41.434806 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.434819 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:41.434860 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:41.452250 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:41.452314 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:41.469520 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.469535 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:41.469583 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:41.487739 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:41.487809 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:41.505191 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.505207 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:41.505219 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:41.505231 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:41.525061 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:41.525078 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:41.560648 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:41.560665 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:41.586056 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:41.586076 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:41.606348 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:41.606364 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:41.647048 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:41.647072 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:41.688983 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:41.689004 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:41.770298 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:41.770332 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:41.790956 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:41.790978 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:41.845157 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:41.838079   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.838593   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840185   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840709   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.842215   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:41.838079   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.838593   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840185   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840709   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.842215   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:41.845168 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:41.845179 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:41.870756 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:41.870774 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
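
The "Gathering logs for ..." steps run a fixed set of shell commands on the node. The table below collects them verbatim from the ssh_runner lines above; the Go wrapper is illustrative only (the real runs go through an SSH session into the node), but the command strings are exactly the ones in the log. Per-container sources (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) additionally use docker logs --tail 400 <id> with the IDs enumerated at the top of each iteration.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Keys match the "Gathering logs for ..." labels above; values are
    // copied verbatim from the ssh_runner lines. Note the crictl line
    // falls back to `docker ps -a` when crictl is absent.
    var gatherCommands = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range gatherCommands {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s ==\nerr: %v\n%s\n", name, err, out)
        }
    }

The describe-nodes step keeps failing for the same reason the healthz probe does: the node-local kubeconfig points kubectl at localhost:8441, and with no apiserver listening, every discovery request is refused.
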
	I0804 09:04:44.425368 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:44.425831 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:44.425949 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:44.446645 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:44.446699 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:44.464564 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:44.464619 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:44.482513 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.482525 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:44.482568 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:44.500219 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:44.500270 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:44.517554 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.517571 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:44.517623 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:44.535531 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:44.535609 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:44.552895 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.552911 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:44.552922 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:44.552937 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:44.588906 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:44.588923 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:44.668044 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:44.668073 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:44.688833 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:44.688850 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:44.744103 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:44.737229   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.737782   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739326   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739679   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.741202   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:44.737229   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.737782   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739326   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739679   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.741202   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:44.744120 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:44.744132 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:44.771558 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:44.771575 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:44.798390 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:44.798407 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:44.818712 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:44.818730 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:44.860754 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:44.860771 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:44.903154 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:44.903172 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:44.959593 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:44.959614 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:47.481798 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:47.482267 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:47.482394 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:47.501436 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:47.501507 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:47.519403 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:47.519456 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:47.536505 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.536517 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:47.536559 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:47.555052 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:47.555108 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:47.572292 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.572308 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:47.572378 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:47.589316 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:47.589387 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:47.606568 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.606583 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:47.606592 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:47.606605 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:47.660924 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:47.654305   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.654756   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656225   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656600   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.658040   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:47.654305   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.654756   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656225   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656600   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.658040   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:47.660934 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:47.660945 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:47.686316 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:47.686336 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:47.711494 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:47.711510 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:47.755256 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:47.755279 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:47.808519 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:47.808541 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:47.829575 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:47.829592 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:47.850735 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:47.850752 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:47.892056 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:47.892076 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:47.929604 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:47.929623 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:48.003755 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:48.003779 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:50.522949 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:50.523426 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:50.523511 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:50.542559 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:50.542623 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:50.561817 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:50.561873 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:50.580293 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.580306 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:50.580358 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:50.598065 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:50.598132 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:50.615051 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.615064 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:50.615102 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:50.634158 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:50.634219 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:50.651067 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.651079 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:50.651088 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:50.651098 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:50.675452 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:50.675468 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:50.696108 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:50.696124 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:50.739266 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:50.739285 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:50.757817 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:50.757839 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:50.812181 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:50.805280   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.805733   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807319   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807746   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.809261   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:50.805280   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.805733   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807319   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807746   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.809261   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:50.812192 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:50.812204 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:50.837813 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:50.837830 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:50.881332 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:50.881350 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:50.933150 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:50.933172 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:50.955107 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:50.955127 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:50.991284 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:50.991302 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:53.570964 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:53.571444 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:53.571539 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:53.591352 1661480 logs.go:282] 3 containers: [45dd8fe239bc 20f5be32354b a70a68ec6169]
	I0804 09:04:53.591419 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:53.610707 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:53.610764 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:53.630949 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.630964 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:53.631011 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:53.665523 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:53.665599 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:53.683393 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.683410 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:53.683463 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:53.700974 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:53.701080 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:53.719520 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.719534 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:53.719543 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:53.719556 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:53.801389 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:53.801410 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:05:15.553212 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.751766465s)
	W0804 09:05:15.553274 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:03.857554   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:13.859266   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:15.547844   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:55018->[::1]:8441: read: connection reset by peer"
	E0804 09:05:15.548469   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:15.550082   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:03.857554   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:13.859266   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:15.547844   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:55018->[::1]:8441: read: connection reset by peer"
	E0804 09:05:15.548469   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:15.550082   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
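
This iteration is the first sign of movement: the enumeration now shows a third kube-apiserver container (45dd8fe239bc), and the describe-nodes attempt takes 21.75s instead of failing instantly, because kubectl's discovery burned through two 10-second TLS handshake timeouts (09:05:03 and 09:05:13) before the connection was reset. In other words, the apiserver's port briefly opened and then went away again, which is what a crash-looping apiserver looks like from the outside. A plain TCP dial is enough to tell the two failure modes apart; this is a rough triage sketch, not part of minikube:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // "connection refused" means nothing is listening on 8441 yet; a TLS
    // handshake timeout means the socket accepted the connection but the
    // apiserver never completed a handshake (it was mid-restart). A bare
    // TCP dial separates the two without any TLS machinery.
    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            fmt.Println("port closed or unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("port open; if healthz still fails, the apiserver is up but not serving")
    }
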
	I0804 09:05:15.553282 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:05:15.553295 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	W0804 09:05:15.571925 1661480 logs.go:130] failed kube-apiserver [20f5be32354b]: command: /bin/bash -c "docker logs --tail 400 20f5be32354b" /bin/bash -c "docker logs --tail 400 20f5be32354b": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 20f5be32354b
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 20f5be32354b
	
	** /stderr **
	I0804 09:05:15.571940 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:15.571956 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:15.597489 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:05:15.597508 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	W0804 09:05:15.615861 1661480 logs.go:130] failed etcd [e4c966ab8463]: command: /bin/bash -c "docker logs --tail 400 e4c966ab8463" /bin/bash -c "docker logs --tail 400 e4c966ab8463": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: e4c966ab8463
	 output: 
	** stderr ** 
	Error response from daemon: No such container: e4c966ab8463
	
	** /stderr **
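
The two "No such container" failures follow directly from that churn: the IDs 20f5be32354b and e4c966ab8463 were snapshotted at the top of the iteration, and Docker had already removed those containers (their replacements, 45dd8fe239bc and 28a5795de0c3, appear in the next enumeration) by the time their logs were requested. One way to shrink that window, sketched here with a hypothetical latestID helper, is to re-resolve the ID by k8s_ name immediately before reading:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // latestID re-resolves a component's current container just before the
    // log fetch. docker ps lists newest containers first, so the first ID
    // is the current incarnation. Hypothetical helper, not minikube code.
    func latestID(component string) (string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return "", err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return "", fmt.Errorf("no %s container found", component)
        }
        return ids[0], nil
    }

    func main() {
        id, err := latestID("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        fmt.Printf("%s\n", out)
    }

Even with re-resolution the race remains while containers are being recreated every few seconds, which is why the loop simply logs the failure and moves on to the next source.
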
	I0804 09:05:15.615870 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:15.615881 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:15.658508 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:15.658527 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:15.710914 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:15.710934 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:15.756829 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:15.756848 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:15.775591 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:15.775608 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:15.802209 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:15.802225 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:15.822675 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:15.822691 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:18.362881 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:18.363337 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:18.363427 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:18.382725 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:18.382780 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:18.400834 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:18.400903 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:18.418630 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.418643 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:18.418699 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:18.436449 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:18.436510 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:18.453593 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.453609 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:18.453670 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:18.470809 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:18.470867 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:18.487902 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.487915 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:18.487925 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:18.487935 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:18.570521 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:18.570543 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:18.625182 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:18.618258   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.618805   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620328   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620711   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.622272   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:18.618258   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.618805   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620328   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620711   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.622272   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:18.625193 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:18.625204 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:18.651165 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:18.651185 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:18.671188 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:18.671203 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:18.714383 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:18.714403 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:18.750997 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:18.751016 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:18.769854 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:18.769870 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:18.795165 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:18.795180 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:18.849360 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:18.849380 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:18.871229 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:18.871254 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:21.418353 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:21.418833 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:21.418922 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:21.438054 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:21.438113 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:21.455587 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:21.455654 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:21.472934 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.472954 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:21.473001 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:21.491717 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:21.491795 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:21.509543 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.509559 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:21.509604 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:21.527160 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:21.527217 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:21.544207 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.544222 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:21.544234 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:21.544243 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:21.563890 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:21.563904 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:21.583720 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:21.583737 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:21.602128 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:21.602141 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:21.658059 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:21.650567   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.651103   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.652665   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.653107   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.654674   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:21.650567   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.651103   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.652665   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.653107   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.654674   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:21.658074 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:21.658084 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:21.685555 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:21.685574 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:21.712525 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:21.712541 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:21.756390 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:21.756410 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:21.810403 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:21.810424 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:21.853991 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:21.854013 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:21.889567 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:21.889585 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
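
[Editor's note: the repeated "Checking apiserver healthz ... connection refused" pairs throughout this log come from minikube polling the apiserver's /healthz endpoint every few seconds while it waits for the control plane to come up. A minimal, self-contained Go sketch of that kind of probe loop follows; the function name, the 3-second interval, and the deadline handling are illustrative assumptions, not minikube's actual implementation.]

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz polls an apiserver /healthz endpoint until it answers
    // 200 OK or the deadline passes. A "connection refused" error simply
    // means nothing is listening on the port yet, which is exactly what
    // the log above keeps reporting.
    func probeHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves a self-signed cert during bootstrap,
    		// so a bare health probe skips verification (sketch only).
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthy
    			}
    		}
    		time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := probeHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
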
	I0804 09:05:24.473851 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:24.474320 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:24.474415 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:24.493643 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:24.493706 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:24.511933 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:24.511991 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:24.529775 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.529790 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:24.529844 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:24.547893 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:24.547953 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:24.565265 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.565280 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:24.565322 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:24.582372 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:24.582439 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:24.600116 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.600132 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:24.600144 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:24.600157 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:24.625394 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:24.625413 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:24.649921 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:24.649938 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:24.669931 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:24.669947 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:24.724632 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:24.717099   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.717627   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719144   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719576   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.721085   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:24.717099   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.717627   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719144   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719576   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.721085   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:24.724643 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:24.724654 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:24.745114 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:24.745130 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:24.791138 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:24.791159 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:24.844211 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:24.844232 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:24.864815 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:24.864831 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:24.905868 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:24.905889 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:24.944193 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:24.944210 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
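
[Editor's note: each "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" line enumerates a control-plane component's containers by the kubelet-assigned "k8s_" name prefix; the "N containers: [...]" lines are the result. A rough local equivalent in Go, running docker directly rather than over SSH as minikube does; the helper name is made up.]

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches the kubelet convention k8s_<component>_..., returning their
    // short IDs, one per output line.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		// Mirrors the "N containers: [...]" lines in the log above.
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    	}
    }
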
	I0804 09:05:27.526606 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:27.527052 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:27.527133 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:27.546023 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:27.546102 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:27.564059 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:27.564125 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:27.581355 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.581372 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:27.581421 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:27.598969 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:27.599042 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:27.616326 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.616340 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:27.616398 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:27.633567 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:27.633636 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:27.650100 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.650116 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:27.650129 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:27.650143 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:27.674675 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:27.674691 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:27.694432 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:27.694452 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:27.740275 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:27.740293 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:27.792672 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:27.792692 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:27.837134 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:27.837152 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:27.862402 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:27.862418 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:27.884136 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:27.884160 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:27.921302 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:27.921320 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:28.005198 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:28.005221 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:28.024305 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:28.024319 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:28.078812 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:28.071766   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.072266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.073814   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.074266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.075728   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:28.071766   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.072266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.073814   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.074266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.075728   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
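
[Editor's note: every "describe nodes" attempt fails before kubectl can even fetch the API group list: the in-VM kubeconfig points at localhost:8441, and "connect: connection refused" means no process is listening on that socket at all, as opposed to a timeout, which would suggest dropped packets or a hung server. A small Go sketch of that distinction, purely illustrative and not part of minikube:]

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	// The same endpoint kubectl fails to reach in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err == nil {
    		conn.Close()
    		fmt.Println("something is listening on 8441")
    		return
    	}
    	var ne net.Error
    	if errors.Is(err, syscall.ECONNREFUSED) {
    		// The kernel answered with a RST: the port is reachable but
    		// nothing is bound to it, i.e. the apiserver is not running.
    		fmt.Println("connection refused: no apiserver listening")
    	} else if errors.As(err, &ne) && ne.Timeout() {
    		fmt.Println("timed out: host unreachable or packets dropped")
    	} else {
    		fmt.Println("dial error:", err)
    	}
    }
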
	I0804 09:05:30.579425 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:30.579882 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:30.579979 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:30.599053 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:30.599118 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:30.616639 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:30.616706 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:30.634419 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.634434 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:30.634478 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:30.652037 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:30.652091 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:30.668537 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.668550 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:30.668601 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:30.686111 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:30.686177 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:30.703170 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.703183 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:30.703197 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:30.703208 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:30.780512 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:30.780534 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:30.835862 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:30.828571   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.829089   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.830648   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.831084   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.832656   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:30.828571   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.829089   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.830648   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.831084   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.832656   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:30.835871 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:30.835884 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:30.862953 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:30.862971 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:30.906430 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:30.906449 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:30.962204 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:30.962222 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:30.983077 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:30.983098 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:31.027250 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:31.027271 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:31.064477 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:31.064493 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:31.082683 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:31.082700 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:31.107897 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:31.107916 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:33.629309 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:33.629783 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:33.629874 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:33.649062 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:33.649144 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:33.667342 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:33.667406 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:33.684879 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.684891 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:33.684936 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:33.702256 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:33.702310 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:33.719436 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.719447 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:33.719486 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:33.737005 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:33.737062 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:33.754700 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.754716 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:33.754728 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:33.754740 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:33.830846 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:33.830868 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:33.856980 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:33.856997 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:33.909389 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:33.909410 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:33.929778 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:33.929794 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:33.965678 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:33.965696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:33.984178 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:33.984194 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:34.038018 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:34.031060   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.031554   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033042   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033546   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.035064   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:34.031060   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.031554   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033042   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033546   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.035064   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:34.038028 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:34.038040 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:34.065147 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:34.065164 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:34.085201 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:34.085217 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:34.131576 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:34.131598 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
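
[Editor's note: the "Gathering logs for ..." steps all reduce to tailing a bounded number of lines, either from a container ("docker logs --tail 400 <id>") or from a systemd unit ("journalctl -u <unit> -n 400"), so a failed start never dumps unbounded output. A compact Go sketch of that pattern; the function names are illustrative, and the container ID in main is just one taken from the log above.]

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainer returns the last n lines of a container's logs,
    // matching the `docker logs --tail 400 <id>` invocations above.
    func tailContainer(id string, n int) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	return string(out), err
    }

    // tailUnits returns the last n journal lines for one or more systemd
    // units, matching `journalctl -u docker -u cri-docker -n 400`.
    func tailUnits(n int, units ...string) (string, error) {
    	args := []string{"journalctl", "-n", fmt.Sprint(n)}
    	for _, u := range units {
    		args = append(args, "-u", u)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// e.g. the kubelet tail that precedes many healthz retries above:
    	if out, err := tailUnits(400, "kubelet"); err == nil {
    		fmt.Print(out)
    	}
    	// and a container tail, given an ID from `docker ps`:
    	if out, err := tailContainer("45dd8fe239bc", 400); err == nil {
    		fmt.Print(out)
    	}
    }
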
	I0804 09:05:36.677320 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:36.677738 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:36.677816 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:36.696778 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:36.696834 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:36.714338 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:36.714400 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:36.731585 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.731597 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:36.731648 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:36.749262 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:36.749323 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:36.766369 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.766382 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:36.766424 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:36.783683 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:36.783747 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:36.800562 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.800577 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:36.800589 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:36.800601 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:36.826322 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:36.826341 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:36.846705 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:36.846725 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:36.900647 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:36.900670 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:36.945061 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:36.945082 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:36.980935 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:36.980953 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:36.999355 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:36.999370 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:37.045302 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:37.045321 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:37.066069 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:37.066087 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:37.147619 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:37.147641 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:37.204004 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:37.196190   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.197826   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.198292   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.199819   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.200207   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:37.196190   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.197826   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.198292   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.199819   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.200207   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:37.204017 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:37.204029 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:39.729976 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:39.730386 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:39.730457 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:39.749322 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:39.749391 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:39.767341 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:39.767399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:39.783917 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.783928 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:39.783968 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:39.801060 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:39.801127 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:39.818194 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.818205 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:39.818259 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:39.835049 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:39.835119 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:39.851781 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.851792 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:39.851802 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:39.851811 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:39.871504 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:39.871519 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:39.926544 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:39.919634   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.920101   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.921669   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.922050   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.923665   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:39.919634   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.920101   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.921669   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.922050   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.923665   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:39.926554 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:39.926565 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:39.952624 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:39.952638 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:39.972011 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:39.972027 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:40.025874 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:40.025896 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:40.109801 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:40.109821 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:40.136255 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:40.136272 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:40.183580 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:40.183599 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:40.204493 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:40.204511 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:40.248273 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:40.248291 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
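
[Editor's note: the "container status" step is a shell fallback chain. `which crictl || echo crictl` keeps the command line well-formed whether or not the CRI CLI is installed, and `|| sudo docker ps -a` falls back to plain docker if crictl is missing or errors out. The same try-then-fall-back idea expressed in Go, as a sketch only:]

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus mirrors the fallback in the log above: prefer
    // crictl when present, otherwise fall back to plain docker.
    func containerStatus() (string, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("neither crictl nor docker answered:", err)
    		return
    	}
    	fmt.Print(out)
    }
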
	I0804 09:05:42.784699 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:42.785199 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:42.785329 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:42.804095 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:42.804174 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:42.821904 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:42.821955 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:42.839033 1661480 logs.go:282] 0 containers: []
	W0804 09:05:42.839045 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:42.839085 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:42.857060 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:42.857129 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:42.874536 1661480 logs.go:282] 0 containers: []
	W0804 09:05:42.874549 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:42.874606 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:42.892601 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:42.892659 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:42.910100 1661480 logs.go:282] 0 containers: []
	W0804 09:05:42.910120 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:42.910129 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:42.910139 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:42.934869 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:42.934885 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:42.953955 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:42.953974 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:43.006663 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:43.006683 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:43.053918 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:43.053939 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:43.090417 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:43.090434 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:43.174196 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:43.174219 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:43.192681 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:43.192699 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:43.248572 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:43.241692   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.242267   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.243809   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.244176   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.245595   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:43.241692   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.242267   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.243809   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.244176   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.245595   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:43.248582 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:43.248595 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:43.273840 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:43.273857 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:43.317403 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:43.317424 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:45.839142 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:45.839624 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:45.839725 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:45.858871 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:45.858933 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:45.877176 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:45.877228 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:45.894585 1661480 logs.go:282] 0 containers: []
	W0804 09:05:45.894599 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:45.894640 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:45.911858 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:45.911915 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:45.929219 1661480 logs.go:282] 0 containers: []
	W0804 09:05:45.929231 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:45.929293 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:45.946407 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:45.946463 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:45.964503 1661480 logs.go:282] 0 containers: []
	W0804 09:05:45.964514 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:45.964524 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:45.964532 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:46.041227 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:46.041258 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:46.096253 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:46.089547   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.090076   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.091586   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.091864   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.093286   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:46.089547   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.090076   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.091586   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.091864   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.093286   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:46.096264 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:46.096275 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:46.121027 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:46.121043 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:46.140652 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:46.140668 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:46.184099 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:46.184117 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:46.239471 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:46.239498 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:46.260203 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:46.260218 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:46.304661 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:46.304683 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:46.322929 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:46.322946 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:46.349597 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:46.349614 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
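
[Editor's note: the dmesg step filters the kernel ring buffer to warning severity and above (--level warn,err,crit,alert,emerg), disables the pager and colors (-P, -L=never), formats timestamps human-readably (-H), and keeps only the last 400 lines. The same two-stage pipeline can be expressed in Go by wiring one command's stdout into the next; an illustrative sketch:]

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    	dmesg := exec.Command("sudo", "dmesg", "-PH", "-L=never",
    		"--level", "warn,err,crit,alert,emerg")
    	tail := exec.Command("tail", "-n", "400")

    	pipe, err := dmesg.StdoutPipe()
    	if err != nil {
    		panic(err)
    	}
    	tail.Stdin = pipe // feed dmesg's output straight into tail

    	if err := dmesg.Start(); err != nil {
    		panic(err)
    	}
    	out, err := tail.Output() // runs tail and waits for it to finish
    	if err != nil {
    		panic(err)
    	}
    	dmesg.Wait()
    	fmt.Print(string(out))
    }
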
	I0804 09:05:48.889394 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:48.889879 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:48.889967 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:48.909391 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:48.909453 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:48.927208 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:48.927271 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:48.944578 1661480 logs.go:282] 0 containers: []
	W0804 09:05:48.944589 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:48.944627 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:48.962359 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:48.962441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:48.979597 1661480 logs.go:282] 0 containers: []
	W0804 09:05:48.979608 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:48.979646 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:48.996244 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:48.996323 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:49.013599 1661480 logs.go:282] 0 containers: []
	W0804 09:05:49.013613 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:49.013624 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:49.013644 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:49.033537 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:49.033554 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:49.086196 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:49.086216 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:49.106369 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:49.106383 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:49.141789 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:49.141805 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:49.221717 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:49.221741 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:49.276646 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:49.269311   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.269820   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.271422   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.271819   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.273274   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:49.269311   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.269820   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.271422   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.271819   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.273274   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
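	The describe-nodes failure above is the same refused port seen from inside the node; one way to confirm which endpoint the in-node kubeconfig targets (a sketch, assuming the file layout this log already uses):
	
		# Inspect the server field of the kubeconfig passed to the describe-nodes call
		sudo grep 'server:' /var/lib/minikube/kubeconfig
		# Expected output of the form: server: https://localhost:8441
	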
	I0804 09:05:49.276656 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:49.276670 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:49.321356 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:49.321377 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:49.365595 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:49.365613 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:49.384099 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:49.384117 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:49.411209 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:49.411228 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:51.937395 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:51.937838 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:51.937922 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:51.956704 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:51.956769 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:51.974346 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:51.974399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:51.991495 1661480 logs.go:282] 0 containers: []
	W0804 09:05:51.991507 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:51.991549 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:52.011643 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:52.011711 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:52.029478 1661480 logs.go:282] 0 containers: []
	W0804 09:05:52.029490 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:52.029540 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:52.046644 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:52.046722 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:52.064950 1661480 logs.go:282] 0 containers: []
	W0804 09:05:52.064963 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:52.064974 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:52.064986 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:52.121641 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:52.121666 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:52.207435 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:52.207466 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:52.234341 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:52.234364 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:52.254927 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:52.254946 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:52.298877 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:52.298897 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:52.334848 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:52.334867 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:52.353549 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:52.353565 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:52.406664 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:52.399095   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.399713   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.400815   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.402371   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.402719   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:52.399095   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.399713   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.400815   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.402371   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.402719   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:52.406679 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:52.406689 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:52.432229 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:52.432246 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:52.451833 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:52.451848 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:55.009056 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:55.009576 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:55.009639 1661480 kubeadm.go:593] duration metric: took 4m5.290563198s to restartPrimaryControlPlane
	W0804 09:05:55.009718 1661480 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 09:05:55.009762 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:05:55.871445 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:05:55.882275 1661480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:05:55.890471 1661480 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:05:55.890520 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:05:55.898415 1661480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:05:55.898428 1661480 kubeadm.go:157] found existing configuration files:
	
	I0804 09:05:55.898465 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:05:55.906151 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:05:55.906189 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:05:55.913607 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:05:55.921040 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:05:55.921073 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:05:55.928201 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:05:55.936065 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:05:55.936113 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:05:55.943534 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:05:55.951211 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:05:55.951253 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
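	The four grep-and-remove steps above can be read as a single cleanup pass over the stale kubeconfigs; an equivalent sketch (illustrative only, not the exact code minikube runs):
	
		# Drop any kubeconfig that does not reference the expected control-plane endpoint
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done
	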
	I0804 09:05:55.958383 1661480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:05:55.991847 1661480 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:05:55.991901 1661480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:05:56.004623 1661480 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:05:56.004692 1661480 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:05:56.004732 1661480 kubeadm.go:310] OS: Linux
	I0804 09:05:56.004768 1661480 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:05:56.004807 1661480 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:05:56.004862 1661480 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:05:56.004941 1661480 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:05:56.005006 1661480 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:05:56.005083 1661480 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:05:56.005137 1661480 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:05:56.005193 1661480 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:05:56.005278 1661480 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:05:56.054357 1661480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:05:56.054479 1661480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:05:56.054635 1661480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:05:56.064998 1661480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:05:56.067952 1661480 out.go:235]   - Generating certificates and keys ...
	I0804 09:05:56.068027 1661480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:05:56.068074 1661480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:05:56.068144 1661480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:05:56.068209 1661480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:05:56.068279 1661480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:05:56.068322 1661480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:05:56.068385 1661480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:05:56.068433 1661480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:05:56.068492 1661480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:05:56.068549 1661480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:05:56.068580 1661480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:05:56.068624 1661480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:05:56.846466 1661480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:05:57.293494 1661480 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:05:57.586648 1661480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:05:57.707352 1661480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:05:58.140308 1661480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:05:58.141365 1661480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:05:58.143879 1661480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:05:58.146322 1661480 out.go:235]   - Booting up control plane ...
	I0804 09:05:58.146440 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:05:58.146521 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:05:58.146580 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:05:58.157812 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:05:58.157949 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:05:58.163040 1661480 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:05:58.163314 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:05:58.163387 1661480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:05:58.241217 1661480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:05:58.241378 1661480 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:05:59.242975 1661480 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001870906s
	I0804 09:05:59.246768 1661480 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:05:59.246925 1661480 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I0804 09:05:59.247072 1661480 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:05:59.247191 1661480 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:06:00.899560 1661480 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.652491519s
	I0804 09:06:31.896796 1661480 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 32.64974442s
	I0804 09:09:59.247676 1661480 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	I0804 09:09:59.247761 1661480 kubeadm.go:310] 
	I0804 09:09:59.247995 1661480 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:09:59.248237 1661480 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:09:59.248440 1661480 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:09:59.248589 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:09:59.248701 1661480 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:09:59.248843 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:09:59.248851 1661480 kubeadm.go:310] 
	I0804 09:09:59.251561 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:09:59.251846 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:09:59.251983 1661480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:09:59.252295 1661480 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:09:59.252358 1661480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
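	Applying the crictl hint kubeadm prints above to this failure would look roughly like the following (CONTAINERID is a placeholder, not a value from this log):
	
		# List kube containers via cri-dockerd, then pull the failing apiserver's logs
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID
	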
	W0804 09:09:59.252583 1661480 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870906s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.652491519s
	[control-plane-check] kube-scheduler is healthy after 32.64974442s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 09:09:59.252631 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:10:00.037426 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:10:00.048756 1661480 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:10:00.048799 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:10:00.056703 1661480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:10:00.056711 1661480 kubeadm.go:157] found existing configuration files:
	
	I0804 09:10:00.056746 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:10:00.064271 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:10:00.064310 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:10:00.071720 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:10:00.079478 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:10:00.079512 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:10:00.086675 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:10:00.094268 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:10:00.094310 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:10:00.101549 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:10:00.108748 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:10:00.108780 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:10:00.115895 1661480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:10:00.150607 1661480 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:10:00.150679 1661480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:10:00.163722 1661480 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:10:00.163786 1661480 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:10:00.163846 1661480 kubeadm.go:310] OS: Linux
	I0804 09:10:00.163909 1661480 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:10:00.163960 1661480 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:10:00.164019 1661480 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:10:00.164060 1661480 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:10:00.164099 1661480 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:10:00.164143 1661480 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:10:00.164177 1661480 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:10:00.164213 1661480 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:10:00.164247 1661480 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:10:00.214655 1661480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:10:00.214804 1661480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:10:00.214924 1661480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:10:00.225204 1661480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:10:00.228114 1661480 out.go:235]   - Generating certificates and keys ...
	I0804 09:10:00.228235 1661480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:10:00.228353 1661480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:10:00.228472 1661480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:10:00.228537 1661480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:10:00.228597 1661480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:10:00.228639 1661480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:10:00.228694 1661480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:10:00.228785 1661480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:10:00.228876 1661480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:10:00.228943 1661480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:10:00.228999 1661480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:10:00.229083 1661480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:10:00.330549 1661480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:10:00.508036 1661480 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:10:00.741967 1661480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:10:01.526835 1661480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:10:01.662111 1661480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:10:01.662652 1661480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:10:01.664702 1661480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:10:01.666272 1661480 out.go:235]   - Booting up control plane ...
	I0804 09:10:01.666353 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:10:01.666413 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:10:01.667084 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:10:01.679192 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:10:01.679268 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:10:01.684800 1661480 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:10:01.685864 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:10:01.685922 1661480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:10:01.773321 1661480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:10:01.773477 1661480 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:10:02.774854 1661480 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001670583s
	I0804 09:10:02.777450 1661480 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:10:02.777542 1661480 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I0804 09:10:02.777645 1661480 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:10:02.777709 1661480 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:10:06.220867 1661480 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.44333807s
	I0804 09:10:36.606673 1661480 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 33.829135405s
	I0804 09:14:02.777907 1661480 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	I0804 09:14:02.777973 1661480 kubeadm.go:310] 
	I0804 09:14:02.778102 1661480 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:14:02.778204 1661480 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:14:02.778303 1661480 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:14:02.778415 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:14:02.778499 1661480 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:14:02.778604 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:14:02.778614 1661480 kubeadm.go:310] 
	I0804 09:14:02.781964 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:14:02.782147 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:14:02.782232 1661480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:14:02.782512 1661480 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I0804 09:14:02.782622 1661480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
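	The second attempt fails the same livez check, this time surfacing as a client-rate-limiter error because the 4m0s wait deadline expired before another probe could run, rather than as a refused connection. A quick way to check whether the apiserver container is crash-looping, reusing the same docker filter this log applies elsewhere:
	
		# Show apiserver container state; a restart loop shows up as "Exited (...)" entries
		sudo docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
	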
	I0804 09:14:02.782672 1661480 kubeadm.go:394] duration metric: took 12m13.088610065s to StartCluster
	I0804 09:14:02.782740 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 09:14:02.782800 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 09:14:02.821166 1661480 cri.go:89] found id: "c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	I0804 09:14:02.821177 1661480 cri.go:89] found id: ""
	I0804 09:14:02.821190 1661480 logs.go:282] 1 containers: [c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e]
	I0804 09:14:02.821273 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.824824 1661480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 09:14:02.824881 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 09:14:02.861272 1661480 cri.go:89] found id: "0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	I0804 09:14:02.861286 1661480 cri.go:89] found id: ""
	I0804 09:14:02.861293 1661480 logs.go:282] 1 containers: [0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1]
	I0804 09:14:02.861334 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.864640 1661480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 09:14:02.864684 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 09:14:02.896631 1661480 cri.go:89] found id: ""
	I0804 09:14:02.896648 1661480 logs.go:282] 0 containers: []
	W0804 09:14:02.896654 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:14:02.896660 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 09:14:02.896720 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 09:14:02.929947 1661480 cri.go:89] found id: "ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e"
	I0804 09:14:02.929961 1661480 cri.go:89] found id: ""
	I0804 09:14:02.929970 1661480 logs.go:282] 1 containers: [ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e]
	I0804 09:14:02.930026 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.933377 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 09:14:02.933429 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 09:14:02.966936 1661480 cri.go:89] found id: ""
	I0804 09:14:02.966951 1661480 logs.go:282] 0 containers: []
	W0804 09:14:02.966958 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:14:02.966962 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 09:14:02.967020 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 09:14:02.998599 1661480 cri.go:89] found id: "19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	I0804 09:14:02.998613 1661480 cri.go:89] found id: ""
	I0804 09:14:02.998622 1661480 logs.go:282] 1 containers: [19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec]
	I0804 09:14:02.998668 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:03.002053 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 09:14:03.002114 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 09:14:03.033926 1661480 cri.go:89] found id: ""
	I0804 09:14:03.033944 1661480 logs.go:282] 0 containers: []
	W0804 09:14:03.033953 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:14:03.033973 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:14:03.033985 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:14:03.052185 1661480 logs.go:123] Gathering logs for kube-scheduler [ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e] ...
	I0804 09:14:03.052200 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e"
	I0804 09:14:03.109809 1661480 logs.go:123] Gathering logs for kube-controller-manager [19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec] ...
	I0804 09:14:03.109829 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	I0804 09:14:03.144087 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:14:03.144103 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:14:03.194929 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:14:03.194949 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:14:03.230465 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:14:03.230483 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:14:03.308846 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:14:03.308871 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:14:03.364644 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:03.357491   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.358045   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.359651   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.360110   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.361657   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:14:03.357491   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.358045   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.359651   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.360110   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.361657   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:14:03.364660 1661480 logs.go:123] Gathering logs for kube-apiserver [c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e] ...
	I0804 09:14:03.364672 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	I0804 09:14:03.404334 1661480 logs.go:123] Gathering logs for etcd [0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1] ...
	I0804 09:14:03.404352 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	W0804 09:14:03.438012 1661480 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W0804 09:14:03.438066 1661480 out.go:270] * 
	W0804 09:14:03.438175 1661480 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 09:14:03.438197 1661480 out.go:270] * 
	W0804 09:14:03.440048 1661480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:14:03.443944 1661480 out.go:201] 
	W0804 09:14:03.444897 1661480 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 09:14:03.444921 1661480 out.go:270] * 
	W0804 09:14:03.446546 1661480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:14:03.447852 1661480 out.go:201] 
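
The crictl triage that kubeadm suggests above can be run directly on the minikube node. A minimal sketch, assuming the docker driver and the cri-dockerd socket path reported in this log (CONTAINERID stands in for an ID taken from the ps output):

	# Open a shell on the node for this profile.
	minikube -p functional-699837 ssh
	# List all Kubernetes containers, including exited ones.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID

Running `minikube logs --file=logs.txt`, as the box above suggests, captures the same container logs plus the sections that follow in a single file.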
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       56 seconds ago       Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       57 seconds ago       Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
	
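The ATTEMPT column in this table is a restart counter: etcd has already been restarted five times, kube-apiserver and kube-controller-manager four times each, and all three sit in the Exited state, while kube-scheduler stays Running because it only retries its API watches (see its log section below) instead of exiting. The table can be regenerated on the node; a sketch, assuming the same cri-dockerd socket used throughout this log:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a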
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:04.331074   25085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:04.331581   25085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:04.333104   25085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:04.333543   25085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:04.335095   25085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
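Every kubectl call here is refused because nothing is listening on the apiserver port, which matches the Exited kube-apiserver in the container status above. A direct probe from the node, as a sketch (host and port taken from the kubeadm output earlier; -k skips certificate verification since this is a liveness check only):

	# Fails with "connection refused" while the apiserver is down; prints "ok" once it is healthy.
	curl -k https://192.168.49.2:8441/livez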
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
	
	
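This section is the root failure: etcd exits immediately because it is started with a flag it does not define, so everything above it in the control plane crash-loops. The help text printed here carries no v2 proxy flags, which suggests an etcd release that dropped the legacy v2 proxy (removed in etcd v3.6) while the generated manifest still passes --proxy-refresh-interval; that reading is an inference from this output, not something the log states outright. A sketch for confirming it on the node, using the manifest directory named in the kubeadm output:

	# Show the offending flag in the rendered static pod manifest.
	sudo grep -n "proxy-refresh-interval" /etc/kubernetes/manifests/etcd.yaml
	# Re-read the exact startup error from the exited etcd container.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs 0e5a036fd8651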
	
	==> kernel <==
	 09:14:04 up 1 day, 17:55,  0 users,  load average: 0.03, 0.09, 0.23
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
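The fatal line above ("Error creating leases: error creating storage factory: context deadline exceeded") is the apiserver giving up on its backing store: every gRPC dial to 127.0.0.1:2379 in the preceding lines is refused because etcd never stays up. A direct probe of the etcd client endpoint, as a sketch assuming kubeadm's standard file names under the certificateDir shown earlier ("/var/lib/minikube/certs"):

	# "connection refused" while etcd is down; a JSON health document once it is up.
	sudo curl --cacert /var/lib/minikube/certs/etcd/ca.crt \
	  --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
	  https://127.0.0.1:2379/health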
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:12:58.837427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:11.033725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:11.088932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Aug 04 09:13:43 functional-699837 kubelet[23032]: E0804 09:13:43.644360   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:13:44 functional-699837 kubelet[23032]: E0804 09:13:44.607059   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:45 functional-699837 kubelet[23032]: E0804 09:13:45.348371   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:13:48 functional-699837 kubelet[23032]: E0804 09:13:48.644166   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:48 functional-699837 kubelet[23032]: I0804 09:13:48.644257   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:13:48 functional-699837 kubelet[23032]: E0804 09:13:48.644416   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:13:50 functional-699837 kubelet[23032]: I0804 09:13:50.631932   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:13:50 functional-699837 kubelet[23032]: E0804 09:13:50.632326   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:13:51 functional-699837 kubelet[23032]: E0804 09:13:51.608055   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:52 functional-699837 kubelet[23032]: E0804 09:13:52.692421   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: E0804 09:13:55.349669   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: E0804 09:13:55.643765   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: I0804 09:13:55.643863   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: E0804 09:13:55.644036   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.633505   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.633818   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.643831   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.643903   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.644026   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:13:58 functional-699837 kubelet[23032]: E0804 09:13:58.609095   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:59 functional-699837 kubelet[23032]: E0804 09:13:59.432444   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:14:02 functional-699837 kubelet[23032]: E0804 09:14:02.693365   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (268.689163ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ExtraConfig (742.89s)
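
The 742.89s runtime shows the test burned its entire wait budget polling a control plane that never came back. A hedged manual reproduction, reusing only flags that appear elsewhere in this run (--extra-config and --wait=all from the invocation under test, --alsologtostderr -v=8 from the 08:55 start, both visible in the audit table further down):

	out/minikube-linux-amd64 start -p functional-699837 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all --alsologtostderr -v=8
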

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth (1.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-699837 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:827: (dbg) Non-zero exit: kubectl --context functional-699837 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (46.762731ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:829: failed to get components. args "kubectl --context functional-699837 get po -l tier=control-plane -n kube-system -o=json": exit status 1
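
kubectl gets no items back and the connection to 192.168.49.2:8441 is refused, so nothing is listening on the apiserver port at all. A quick illustrative probe from the host (not part of the recorded run; /livez is the standard apiserver health endpoint):

	curl -k --max-time 2 https://192.168.49.2:8441/livez \
	  || echo 'apiserver unreachable on 192.168.49.2:8441'
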
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
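
The inspect output narrows the failure: the node container itself is fine (State.Status is "running", it holds 192.168.49.2 on the functional-699837 network, and host port 32786 forwards to apiserver port 8441), so the problem is the control plane inside it. An illustrative jq one-liner to reduce the dump to exactly those fields (assumes jq is installed; not part of the recorded run):

	docker inspect functional-699837 | jq '.[0] | {state: .State.Status,
	  ip: .NetworkSettings.Networks["functional-699837"].IPAddress,
	  apiserver_port: .NetworkSettings.Ports["8441/tcp"]}'
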
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (266.981888ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
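
Host "Running" next to the earlier APIServer "Stopped" is the signature of this whole group of failures: the container is up, the control plane inside it is not. A combined status call shows both in one line (sketch; {{.Host}} and {{.APIServer}} are the fields the harness queries above, and {{.Kubelet}} is assumed to be available alongside them):

	out/minikube-linux-amd64 status -p functional-699837 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
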
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-114794 image ls --format yaml --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ ssh     │ functional-114794 ssh pgrep buildkitd                                                                                                               │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ image   │ functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr                                              │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format json --alsologtostderr                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls --format table --alsologtostderr                                                                                         │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ image   │ functional-114794 image ls                                                                                                                          │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ delete  │ -p functional-114794                                                                                                                                │ functional-114794 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │ 04 Aug 25 08:46 UTC │
	│ start   │ -p functional-699837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:46 UTC │                     │
	│ start   │ -p functional-699837 --alsologtostderr -v=8                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 08:55 UTC │                     │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.1                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:3.3                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add registry.k8s.io/pause:latest                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache add minikube-local-cache-test:functional-699837                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ functional-699837 cache delete minikube-local-cache-test:functional-699837                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ list                                                                                                                                                │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl images                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	│ cache   │ functional-699837 cache reload                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ ssh     │ functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                    │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                 │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │ 04 Aug 25 09:01 UTC │
	│ kubectl │ functional-699837 kubectl -- --context functional-699837 get pods                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	│ start   │ -p functional-699837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:01:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:01:42.156481 1661480 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:01:42.156707 1661480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:01:42.156710 1661480 out.go:358] Setting ErrFile to fd 2...
	I0804 09:01:42.156714 1661480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:01:42.156897 1661480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:01:42.157507 1661480 out.go:352] Setting JSON to false
	I0804 09:01:42.158437 1661480 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150191,"bootTime":1754147911,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:01:42.158562 1661480 start.go:140] virtualization: kvm guest
	I0804 09:01:42.160356 1661480 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:01:42.161427 1661480 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:01:42.161472 1661480 notify.go:220] Checking for updates...
	I0804 09:01:42.163278 1661480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:01:42.164206 1661480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:01:42.165120 1661480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:01:42.165996 1661480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:01:42.166919 1661480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:01:42.168183 1661480 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:01:42.168274 1661480 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:01:42.191254 1661480 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:01:42.191357 1661480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:01:42.241393 1661480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2025-08-04 09:01:42.232515248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:01:42.241500 1661480 docker.go:318] overlay module found
	I0804 09:01:42.242889 1661480 out.go:177] * Using the docker driver based on existing profile
	I0804 09:01:42.244074 1661480 start.go:304] selected driver: docker
	I0804 09:01:42.244080 1661480 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:42.244146 1661480 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:01:42.244220 1661480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:01:42.294650 1661480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2025-08-04 09:01:42.286637693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:01:42.295228 1661480 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 09:01:42.295248 1661480 cni.go:84] Creating CNI manager for ""
	I0804 09:01:42.295307 1661480 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:01:42.295353 1661480 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:42.296893 1661480 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 09:01:42.297909 1661480 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:01:42.298895 1661480 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:01:42.299795 1661480 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:01:42.299827 1661480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 09:01:42.299834 1661480 cache.go:56] Caching tarball of preloaded images
	I0804 09:01:42.299892 1661480 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:01:42.299912 1661480 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 09:01:42.299918 1661480 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 09:01:42.300000 1661480 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 09:01:42.318895 1661480 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:01:42.318906 1661480 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:01:42.318921 1661480 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:01:42.318949 1661480 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:01:42.319013 1661480 start.go:364] duration metric: took 47.797µs to acquireMachinesLock for "functional-699837"
	I0804 09:01:42.319031 1661480 start.go:96] Skipping create...Using existing machine configuration
	I0804 09:01:42.319035 1661480 fix.go:54] fixHost starting: 
	I0804 09:01:42.319241 1661480 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 09:01:42.335260 1661480 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 09:01:42.335277 1661480 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 09:01:42.336775 1661480 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 09:01:42.337763 1661480 machine.go:93] provisionDockerMachine start ...
	I0804 09:01:42.337866 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.354303 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.354606 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.354616 1661480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:01:42.480475 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 09:01:42.480497 1661480 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 09:01:42.480554 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.497934 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.498143 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.498149 1661480 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 09:01:42.631472 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 09:01:42.631543 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.651771 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.651968 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.651979 1661480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:01:42.773172 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:01:42.773193 1661480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:01:42.773212 1661480 ubuntu.go:177] setting up certificates
	I0804 09:01:42.773223 1661480 provision.go:84] configureAuth start
	I0804 09:01:42.773312 1661480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 09:01:42.791415 1661480 provision.go:143] copyHostCerts
	I0804 09:01:42.791465 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:01:42.791472 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:01:42.791531 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:01:42.791616 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:01:42.791620 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:01:42.791646 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:01:42.791714 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:01:42.791716 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:01:42.791734 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:01:42.791789 1661480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
	I0804 09:01:43.143340 1661480 provision.go:177] copyRemoteCerts
	I0804 09:01:43.143389 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:01:43.143445 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.161220 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:43.249861 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:01:43.271347 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 09:01:43.292377 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:01:43.313416 1661480 provision.go:87] duration metric: took 540.180755ms to configureAuth
	I0804 09:01:43.313435 1661480 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:01:43.313593 1661480 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:01:43.313633 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.330273 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.330483 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.330489 1661480 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:01:43.457453 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:01:43.457467 1661480 ubuntu.go:71] root file system type: overlay
	I0804 09:01:43.457576 1661480 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:01:43.457634 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.474934 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.475149 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.475211 1661480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:01:43.609712 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:01:43.609798 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.627690 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.627960 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.627979 1661480 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 09:01:43.753925 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:01:43.753943 1661480 machine.go:96] duration metric: took 1.416170869s to provisionDockerMachine
	I0804 09:01:43.753958 1661480 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 09:01:43.753972 1661480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:01:43.754026 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:01:43.754070 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.771133 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:43.861861 1661480 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:01:43.864855 1661480 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:01:43.864888 1661480 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:01:43.864895 1661480 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:01:43.864901 1661480 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:01:43.864911 1661480 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:01:43.864956 1661480 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:01:43.865026 1661480 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:01:43.865096 1661480 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 09:01:43.865126 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 09:01:43.872832 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:01:43.894143 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 09:01:43.915287 1661480 start.go:296] duration metric: took 161.311477ms for postStartSetup
	I0804 09:01:43.915357 1661480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:01:43.915392 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.932959 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.018261 1661480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:01:44.022893 1661480 fix.go:56] duration metric: took 1.703852119s for fixHost
	I0804 09:01:44.022909 1661480 start.go:83] releasing machines lock for "functional-699837", held for 1.703889075s
	I0804 09:01:44.022981 1661480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 09:01:44.039826 1661480 ssh_runner.go:195] Run: cat /version.json
	I0804 09:01:44.039861 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:44.039893 1661480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:01:44.039958 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:44.056968 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.057018 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.215860 1661480 ssh_runner.go:195] Run: systemctl --version
	I0804 09:01:44.220163 1661480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:01:44.224284 1661480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:01:44.241133 1661480 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:01:44.241191 1661480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 09:01:44.249056 1661480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 09:01:44.249074 1661480 start.go:495] detecting cgroup driver to use...
	I0804 09:01:44.249111 1661480 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:01:44.249262 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:01:44.263581 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:44.682033 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:01:44.691892 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:01:44.700781 1661480 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:01:44.700830 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:01:44.709728 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:01:44.718687 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:01:44.727121 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:01:44.735358 1661480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:01:44.743204 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:01:44.751683 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:01:44.760146 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 09:01:44.768590 1661480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:01:44.775769 1661480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:01:44.782939 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:44.861305 1661480 ssh_runner.go:195] Run: sudo systemctl restart containerd
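Editor's note: taken together, the sed edits above reshape the stock /etc/containerd/config.toml for this run: cgroupfs instead of systemd cgroups, the runc v2 shim, a pinned pause image, and an explicit CNI conf_dir, followed by a restart. A condensed sketch of the key edits, assuming the default config layout:

    # Use cgroupfs rather than systemd-managed cgroups for the runc shim.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # Normalize legacy runtime names to the v2 runc shim and pin the pause image.
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd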
	I0804 09:01:45.079189 1661480 start.go:495] detecting cgroup driver to use...
	I0804 09:01:45.079234 1661480 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:01:45.079293 1661480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:01:45.091099 1661480 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:01:45.091152 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:01:45.102759 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:01:45.118200 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:45.531236 1661480 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:01:45.535092 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:01:45.543037 1661480 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 09:01:45.558759 1661480 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:01:45.636615 1661480 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:01:45.710742 1661480 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:01:45.710843 1661480 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 09:01:45.726627 1661480 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:01:45.735943 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:45.815264 1661480 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:01:46.120565 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:01:46.133038 1661480 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 09:01:46.150796 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:01:46.160527 1661480 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:01:46.221390 1661480 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:01:46.295075 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:46.370922 1661480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:01:46.383433 1661480 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:01:46.393933 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:46.488903 1661480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:01:46.549986 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:01:46.560540 1661480 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:01:46.560600 1661480 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:01:46.563751 1661480 start.go:563] Will wait 60s for crictl version
	I0804 09:01:46.563795 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:01:46.566758 1661480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:01:46.597980 1661480 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 09:01:46.598027 1661480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:01:46.620697 1661480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:01:46.645762 1661480 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 09:01:46.645842 1661480 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:01:46.662809 1661480 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 09:01:46.668020 1661480 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0804 09:01:46.668935 1661480 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:01:46.669097 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.081840 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.467578 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.872001 1661480 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:01:47.872135 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:48.275938 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:48.676410 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:49.085653 1661480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:01:49.106101 1661480 docker.go:703] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-699837
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0804 09:01:49.106124 1661480 docker.go:633] Images already preloaded, skipping extraction
	I0804 09:01:49.106192 1661480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:01:49.124259 1661480 docker.go:703] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-699837
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0804 09:01:49.124275 1661480 cache_images.go:85] Images are preloaded, skipping loading
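Editor's note: the "Images are preloaded, skipping loading" decision comes from listing what the Docker daemon already holds and checking it against the image set expected for this Kubernetes version. A rough bash equivalent; the expected list is abbreviated and illustrative:

    expected="registry.k8s.io/kube-apiserver:v1.34.0-beta.0 registry.k8s.io/etcd:3.5.21-0 registry.k8s.io/coredns/coredns:v1.12.1"
    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    missing=0
    for img in $expected; do
      # Extraction of the preload tarball is skipped only if nothing is missing.
      grep -qxF "$img" <<<"$have" || { echo "missing: $img"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo 'Images are preloaded, skipping loading'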
	I0804 09:01:49.124286 1661480 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 09:01:49.124427 1661480 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
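Editor's note: the unit text above is what minikube expects the kubelet service to resolve to once the drop-in written a few lines below is installed. The merged unit can be checked directly; a quick verification sketch:

    # Show the effective unit, including /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
    sudo systemctl cat kubelet
    # After daemon-reload, confirm the override carries the expected node identity.
    systemctl show kubelet -p ExecStart | grep -o 'hostname-override=functional-699837'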
	I0804 09:01:49.124491 1661480 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:01:49.170617 1661480 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0804 09:01:49.170646 1661480 cni.go:84] Creating CNI manager for ""
	I0804 09:01:49.170660 1661480 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:01:49.170668 1661480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 09:01:49.170688 1661480 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:01:49.170805 1661480 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
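Editor's note: the dump above is one file holding four stacked documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It can be sanity-checked without touching the cluster; a sketch, assuming a kubeadm recent enough (v1.26+) to ship the validate subcommand:

    # Parse and validate every document in the generated config.
    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml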
	I0804 09:01:49.170853 1661480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:01:49.178893 1661480 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 09:01:49.178936 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:01:49.186387 1661480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 09:01:49.201786 1661480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 09:01:49.217510 1661480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0804 09:01:49.233089 1661480 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:01:49.236403 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:49.323526 1661480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:01:49.333766 1661480 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 09:01:49.333778 1661480 certs.go:194] generating shared ca certs ...
	I0804 09:01:49.333793 1661480 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:01:49.333937 1661480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:01:49.333980 1661480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:01:49.333986 1661480 certs.go:256] generating profile certs ...
	I0804 09:01:49.334070 1661480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 09:01:49.334108 1661480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 09:01:49.334140 1661480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 09:01:49.334230 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:01:49.334251 1661480 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:01:49.334257 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:01:49.334275 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:01:49.334296 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:01:49.334317 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:01:49.334351 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:01:49.334909 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:01:49.355952 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:01:49.376603 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:01:49.397019 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:01:49.417530 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 09:01:49.437950 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 09:01:49.457994 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:01:49.478390 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 09:01:49.498988 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:01:49.519691 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:01:49.540289 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:01:49.560954 1661480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 09:01:49.576254 1661480 ssh_runner.go:195] Run: openssl version
	I0804 09:01:49.581261 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:01:49.589514 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.592478 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.592512 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.598570 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 09:01:49.606091 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:01:49.613958 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.616884 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.616913 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.622974 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 09:01:49.630466 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:01:49.638717 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.641763 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.641800 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.648809 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
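Editor's note: the b5213941.0, 51391683.0, and 3ec20f2e.0 names above follow the OpenSSL subject-hash convention: each trust-store symlink is named after the output of `openssl x509 -hash` plus a .0 suffix, which is how OpenSSL locates CA certificates at verify time. A minimal sketch:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # lookup name used by OpenSSL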
	I0804 09:01:49.656437 1661480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:01:49.659644 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 09:01:49.665529 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 09:01:49.671334 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 09:01:49.677030 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 09:01:49.682628 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 09:01:49.688419 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
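Editor's note: each `-checkend 86400` probe above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what decides whether a cert needs regenerating. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo 'valid for at least another day'
    else
      echo 'expires within 24h, would be regenerated'
    fi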
	I0804 09:01:49.694068 1661480 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:49.694169 1661480 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:01:49.711391 1661480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:01:49.719062 1661480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 09:01:49.719070 1661480 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 09:01:49.719111 1661480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 09:01:49.726688 1661480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:49.727133 1661480 kubeconfig.go:125] found "functional-699837" server: "https://192.168.49.2:8441"
	I0804 09:01:49.728393 1661480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 09:01:49.735849 1661480 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-08-04 08:47:09.659345836 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-08-04 09:01:49.228640689 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I0804 09:01:49.735860 1661480 kubeadm.go:1152] stopping kube-system containers ...
	I0804 09:01:49.735896 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:01:49.755611 1661480 docker.go:496] Stopping containers: [54bef897d3ad 5e988e8b274a 16527e0d8c26 14c7dc479dba 243f1d3d8950 2fafac7520c8 a70a68ec6169 340fbe431c80 3206d43d6e58 6196286ba923 87c98d51b11a 4dc39892c792 a670d9d90ef4 0cb03d71b984 cdae8372eae9]
	I0804 09:01:49.755668 1661480 ssh_runner.go:195] Run: docker stop 54bef897d3ad 5e988e8b274a 16527e0d8c26 14c7dc479dba 243f1d3d8950 2fafac7520c8 a70a68ec6169 340fbe431c80 3206d43d6e58 6196286ba923 87c98d51b11a 4dc39892c792 a670d9d90ef4 0cb03d71b984 cdae8372eae9
	I0804 09:01:49.833087 1661480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 09:01:49.988574 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:01:49.996961 1661480 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Aug  4 08:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Aug  4 08:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Aug  4 08:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Aug  4 08:51 /etc/kubernetes/scheduler.conf
	
	I0804 09:01:49.996998 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:01:50.004698 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:01:50.012067 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.012114 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:01:50.019467 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:01:50.027050 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.027082 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:01:50.034408 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:01:50.041768 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.041795 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:01:50.049038 1661480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:01:50.056613 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:50.095874 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.185164 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.089256416s)
	I0804 09:01:52.185190 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.321482 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.369615 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
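Editor's note: the restart path replays individual `kubeadm init` phases against the regenerated config instead of running a full init, so on-disk state such as etcd data and CA material is reused. Condensed from the five commands above:

    KUBEADM=/var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      # Word splitting of $phase is intentional: each entry is a phase plus its argument.
      sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" "$KUBEADM" init phase $phase --config "$CFG"
    done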
	I0804 09:01:52.486402 1661480 api_server.go:52] waiting for apiserver process to appear ...
	I0804 09:01:52.486480 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:52.986660 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:53.487520 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:53.499325 1661480 api_server.go:72] duration metric: took 1.012937004s to wait for apiserver process to appear ...
	I0804 09:01:53.499341 1661480 api_server.go:88] waiting for apiserver healthz status ...
	I0804 09:01:53.499366 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:01:58.500087 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:01:58.500130 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:03.500427 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:03.500461 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:08.502025 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:08.502061 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:13.503279 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:13.503317 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:14.779567 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": read tcp 192.168.49.1:33220->192.168.49.2:8441: read: connection reset by peer
	I0804 09:02:14.779627 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:14.780024 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.000448 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:15.000951 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.499579 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:15.499998 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.999661 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:21.000340 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:21.000373 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:26.001332 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:26.001368 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:31.002000 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:31.002033 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.005328 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:36.005357 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.551344 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": read tcp 192.168.49.1:35998->192.168.49.2:8441: read: connection reset by peer
	I0804 09:02:36.551397 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.551841 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:36.999411 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.999848 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:37.500408 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:37.500946 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:37.999558 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:37.999957 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:38.499584 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:38.500029 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:38.999644 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:39.000099 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:39.499738 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:39.500213 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:39.999937 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:40.000357 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:40.500064 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:40.500521 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:40.999940 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:41.000330 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:41.500057 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:41.500511 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:42.000224 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:42.000633 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:42.500342 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:42.500765 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.000455 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.000936 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.499548 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.499961 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.999579 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.999966 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:44.499598 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:44.500010 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:44.999630 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:45.000087 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:45.499708 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:45.500143 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:45.999756 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:46.000186 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:46.499807 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:46.500248 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:46.999865 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:47.000330 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:47.500068 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:47.500472 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:48.000163 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:48.000618 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:48.500337 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:48.500730 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.000434 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.000869 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.499503 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.499937 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.999501 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.999940 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:50.499602 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:50.500057 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:50.999688 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:51.000139 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:51.499774 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:51.500227 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:51.999865 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:52.000295 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:52.500025 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:52.500526 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:53.000242 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:53.000634 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
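Editor's note: the probe sequence above alternates between two failure shapes: 5-second client timeouts while the apiserver accepts but never answers, and immediate connection-refused errors while nothing is listening on 8441 at all. The same health check can be reproduced from the host; a rough curl-based sketch:

    # Poll the apiserver healthz endpoint (self-signed cert, hence -k) until it reports ok.
    until curl -sk --max-time 5 https://192.168.49.2:8441/healthz | grep -qx ok; do
      sleep 0.5
    done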
	I0804 09:02:53.500441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:02:53.519729 1661480 logs.go:282] 2 containers: [535dc83f2f73 a70a68ec6169]
	I0804 09:02:53.519801 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:02:53.538762 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:02:53.538813 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:02:53.556054 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.556070 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:02:53.556116 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:02:53.573504 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:02:53.573556 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:02:53.590727 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.590742 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:02:53.590784 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:02:53.608494 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:02:53.608550 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:02:53.625413 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.625424 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:02:53.625435 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:02:53.625443 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:02:53.665235 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:02:53.665279 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:02:53.683621 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:02:53.683636 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:02:53.708748 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:02:53.708766 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:02:53.729347 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:02:53.729362 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:02:53.770407 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:02:53.770428 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:02:53.852664 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:02:53.852687 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:02:53.907229 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:02:53.900372   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.900835   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902406   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902856   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.904351   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:02:53.900372   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.900835   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902406   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902856   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.904351   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:02:53.907253 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:02:53.907266 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:02:53.932272 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:02:53.932289 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:02:53.966223 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:02:53.966245 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:02:54.018841 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:02:54.018859 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
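Editor's note: the gathering pass above maps each control-plane component to container IDs through a name=k8s_<component> filter, then tails the last 400 log lines of each. A compact equivalent:

    for comp in kube-apiserver etcd kube-scheduler kube-controller-manager; do
      for id in $(docker ps -a --filter "name=k8s_${comp}" --format '{{.ID}}'); do
        echo "== ${comp} ${id} =="
        docker logs --tail 400 "$id"
      done
    done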
	I0804 09:02:56.541137 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:56.541605 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:56.541686 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:02:56.560651 1661480 logs.go:282] 2 containers: [535dc83f2f73 a70a68ec6169]
	I0804 09:02:56.560710 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:02:56.578753 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:02:56.578815 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:02:56.596005 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.596019 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:02:56.596059 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:02:56.613187 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:02:56.613269 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:02:56.629991 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.630005 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:02:56.630051 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:02:56.647935 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:02:56.648000 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:02:56.665663 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.665677 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:02:56.665686 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:02:56.665696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:02:56.703183 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:02:56.703200 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:02:56.757823 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:02:56.750851   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.751407   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.752950   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.753405   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.754929   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:02:56.757834 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:02:56.757846 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:02:56.793009 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:02:56.793031 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:02:56.814543 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:02:56.814560 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:02:56.858353 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:02:56.858374 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:02:56.938490 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:02:56.938512 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:02:56.957429 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:02:56.957445 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:02:56.982565 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:02:56.982582 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:02:57.007749 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:02:57.007767 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:02:57.027909 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:02:57.027926 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:02:59.582075 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:04.583858 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:03:04.583974 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:04.603429 1661480 logs.go:282] 3 containers: [a20e277f239a 535dc83f2f73 a70a68ec6169]
	I0804 09:03:04.603486 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:04.621192 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:04.621271 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:04.638764 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.638780 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:04.638831 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:04.656957 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:04.657045 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:04.673865 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.673881 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:04.673937 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:04.691557 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:03:04.691645 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:04.709384 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.709397 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:04.709412 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:04.709425 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:04.728509 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:03:04.728525 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:03:04.753446 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:04.753464 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:04.772841 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:04.772865 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:03:19.398944 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.626059536s)
	W0804 09:03:19.398974 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:14.821564   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:03:19.391583   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:42134->[::1]:8441: read: connection reset by peer"
	E0804 09:03:19.392195   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.393996   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.394458   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:19.398986 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:19.398996 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:19.427211 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:19.427230 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:19.452181 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:19.452199 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:19.488740 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:19.488758 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:19.543335 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:19.543361 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:19.564213 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:19.564229 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:19.604899 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:19.604921 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:19.642424 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:19.642448 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:22.221477 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:22.222040 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:22.222143 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:22.241050 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:22.241115 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:22.258165 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:22.258242 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:22.276561 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.276574 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:22.276617 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:22.295029 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:22.295092 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:22.312122 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.312132 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:22.312182 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:22.329412 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:22.329488 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:22.346310 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.346323 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:22.346333 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:22.346343 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:22.367806 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:22.367821 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:22.445841 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:22.445861 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:22.471474 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:22.471489 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:22.496759 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:22.496775 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:22.517309 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:22.517327 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:22.557714 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:22.557732 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:22.593146 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:22.593170 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:22.611504 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:22.611518 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:22.665839 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:22.658662   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.659228   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.660791   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.661206   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.662674   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:22.665851 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:22.665861 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:22.702988 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:22.703006 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:22.755945 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:22.755968 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:25.277601 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:25.278136 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:25.278248 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:25.297160 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:25.297216 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:25.316643 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:25.316709 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:25.334387 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.334404 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:25.334454 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:25.351774 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:25.351842 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:25.369473 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.369485 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:25.369530 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:25.387080 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:25.387143 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:25.404296 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.404309 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:25.404318 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:25.404329 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:25.422982 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:25.422997 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:25.476224 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:25.468440   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.468969   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.470557   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.471704   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.472278   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:25.476235 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:25.476245 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:25.501952 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:25.501972 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:25.522116 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:25.522135 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:25.559523 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:25.559539 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:25.611041 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:25.611060 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:25.631550 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:25.631569 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:25.652151 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:25.652168 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:25.726816 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:25.726837 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:25.752766 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:25.752786 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:25.796279 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:25.796296 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:28.337315 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:28.337785 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:28.337864 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:28.356559 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:28.356610 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:28.374336 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:28.374386 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:28.391793 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.391806 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:28.391847 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:28.410341 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:28.410399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:28.427793 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.427809 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:28.427859 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:28.444847 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:28.444924 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:28.462592 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.462609 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:28.462619 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:28.462631 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:28.482600 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:28.482615 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:28.507602 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:28.507619 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:28.526984 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:28.526998 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:28.577894 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:28.577914 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:28.597919 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:28.597936 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:28.617782 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:28.617797 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:28.660530 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:28.660549 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:28.698114 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:28.698131 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:28.771090 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:28.771114 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:28.825345 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:28.818550   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.819081   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.820612   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.821003   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.822518   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:28.825358 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:28.825372 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:28.851539 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:28.851559 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:31.390425 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:31.390852 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:31.390931 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:31.410612 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:31.410681 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:31.428091 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:31.428165 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:31.446602 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.446621 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:31.446675 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:31.464168 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:31.464223 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:31.481049 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.481063 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:31.481115 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:31.497227 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:31.497311 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:31.513575 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.513586 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:31.513594 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:31.513604 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:31.567139 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:31.558828   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.559407   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.561385   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.562296   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.563788   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:31.567151 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:31.567162 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:31.591977 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:31.591994 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:31.644763 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:31.644783 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:31.664981 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:31.664997 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:31.708596 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:31.708616 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:31.734001 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:31.734019 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:31.753980 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:31.754000 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:31.789591 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:31.789609 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:31.825063 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:31.825082 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:31.904005 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:31.904027 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:34.424932 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:34.425333 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:34.425419 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:34.444542 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:34.444596 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:34.461912 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:34.461985 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:34.479889 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.479903 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:34.479953 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:34.497552 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:34.497604 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:34.515003 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.515014 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:34.515053 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:34.532842 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:34.532909 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:34.549350 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.549362 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:34.549371 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:34.549381 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:34.567689 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:34.567704 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:34.605688 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:34.605703 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:34.625847 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:34.625861 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:34.668000 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:34.668021 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:34.742105 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:34.742129 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:34.797022 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:34.790082   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.790655   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792223   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792752   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.794335   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:34.797034 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:34.797047 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:34.822397 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:34.822417 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:34.849317 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:34.849334 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:34.869225 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:34.869259 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:34.923527 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:34.923548 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:37.459936 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:37.460377 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:37.460466 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:37.479380 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:37.479441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:37.497080 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:37.497149 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:37.514761 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.514778 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:37.514824 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:37.532588 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:37.532656 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:37.550208 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.550224 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:37.550275 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:37.568463 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:37.568527 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:37.585787 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.585800 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:37.585809 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:37.585821 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:37.659045 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:37.659073 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:37.685717 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:37.685735 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:37.704291 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:37.704307 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:37.741922 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:37.741943 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:37.793694 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:37.793713 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:37.813368 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:37.813385 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:37.848883 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:37.848900 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:37.867491 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:37.867505 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:37.921199 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:37.913356   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.913927   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916144   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916563   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.918058   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:37.921219 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:37.921231 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:37.947342 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:37.947359 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:40.489125 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:40.489554 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:40.489630 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:40.508607 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:40.508669 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:40.528138 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:40.528187 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:40.545305 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.545318 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:40.545357 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:40.562122 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:40.562191 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:40.579129 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.579144 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:40.579191 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:40.597048 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:40.597124 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:40.614353 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.614368 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:40.614378 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:40.614390 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:40.634206 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:40.634222 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:40.653989 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:40.654006 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:40.672246 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:40.672260 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:40.726229 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:40.719031   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.719524   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721096   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721545   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.723074   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:40.719031   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.719524   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721096   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721545   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.723074   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:40.726242 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:40.726257 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:40.766179 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:40.766200 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:40.821048 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:40.821069 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:40.864128 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:40.864147 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:40.900068 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:40.900085 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:40.973288 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:40.973310 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:41.000020 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:41.000039 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:43.525994 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:43.526421 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:43.526503 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:43.545290 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:43.545349 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:43.562985 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:43.563038 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:43.579516 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.579532 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:43.579582 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:43.597186 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:43.597261 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:43.613554 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.613568 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:43.613609 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:43.631061 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:43.631120 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:43.649100 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.649114 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:43.649125 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:43.649144 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:43.667561 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:43.667577 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:43.721973 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:43.714008   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.714530   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717089   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717552   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.719095   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:43.714008   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.714530   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717089   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717552   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.719095   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:03:43.721984 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:43.721995 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:43.742540 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:43.742556 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:43.780241 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:43.780259 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:43.834318 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:43.834339 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:43.869987 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:43.870005 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:43.946032 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:43.946053 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:43.973679 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:43.973697 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:43.998917 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:43.998935 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:44.019361 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:44.019378 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:46.564446 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:46.564898 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:46.564992 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:46.584902 1661480 logs.go:282] 3 containers: [20f5be32354b a20e277f239a a70a68ec6169]
	I0804 09:03:46.585028 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:46.610427 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:46.610492 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:46.627832 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.627848 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:46.627896 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:46.662895 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:46.662956 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:46.679864 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.679882 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:46.679929 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:46.697936 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:46.697999 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:46.716993 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.717008 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:46.717020 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:46.717029 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:46.790622 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:46.790643 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:46.809548 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:46.809566 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:04:08.045069 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.235482683s)
	W0804 09:04:08.045100 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:56.860697   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:06.861827   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:08.039221   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:51136->[::1]:8441: read: connection reset by peer"
	E0804 09:04:08.039948   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:08.041660   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:03:56.860697   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:06.861827   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:08.039221   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:51136->[::1]:8441: read: connection reset by peer"
	E0804 09:04:08.039948   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:08.041660   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:08.045109 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:08.045120 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:08.071094 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:04:08.071112 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	W0804 09:04:08.089428 1661480 logs.go:130] failed kube-apiserver [a20e277f239a]: command: /bin/bash -c "docker logs --tail 400 a20e277f239a" /bin/bash -c "docker logs --tail 400 a20e277f239a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: a20e277f239a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: a20e277f239a
	
	** /stderr **
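
Note the race visible here: the 09:03:46 poll listed three apiserver containers (including a20e277f239a), but by the time `docker logs` ran at 09:04:08, Docker had already removed that container, hence the "No such container" error; later polls list only [20f5be32354b a70a68ec6169]. This is a benign side effect of gathering logs while the apiserver crash-loops. A hypothetical sketch of a more tolerant gatherer, which re-lists the containers immediately before tailing and skips any that have vanished:

    // Illustrative sketch (not minikube's code): re-list apiserver containers just
    // before tailing, and tolerate ones removed between the listing and the
    // `docker logs` call, as happened above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println("list failed:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                // container may have been garbage-collected since the listing
                fmt.Printf("logs %s: %v (skipping)\n", id, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s", id, logs)
        }
    }
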
	I0804 09:04:08.089437 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:08.089448 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:08.129150 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:08.129169 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:08.185332 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:08.185356 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:08.207810 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:08.207830 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:08.233521 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:08.233539 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:08.253969 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:08.253985 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:08.299455 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:08.299476 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:10.840062 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:10.840666 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:10.840762 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:10.860521 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:10.860576 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:10.877749 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:10.877804 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:10.894797 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.894809 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:10.894851 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:10.911920 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:10.911993 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:10.929397 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.929412 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:10.929461 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:10.947092 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:04:10.947149 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:10.964066 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.964083 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:10.964095 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:10.964107 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:10.983914 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:10.983930 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:11.020490 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:11.020510 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:11.039187 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:11.039203 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:11.095001 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:11.087446   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.087938   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089522   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089962   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.091585   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:11.087446   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.087938   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089522   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089962   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.091585   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:11.095012 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:11.095022 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:11.120789 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:11.120807 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:11.146008 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:11.146024 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:11.166112 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:11.166128 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:11.204792 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:11.204810 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:11.249456 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:11.249479 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:11.325884 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:11.325911 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:13.884709 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:13.885223 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:13.885353 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:13.904359 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:13.904417 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:13.922238 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:13.922302 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:13.939358 1661480 logs.go:282] 0 containers: []
	W0804 09:04:13.939372 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:13.939426 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:13.956853 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:13.956910 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:13.974857 1661480 logs.go:282] 0 containers: []
	W0804 09:04:13.974869 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:13.974908 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:13.992568 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:04:13.992628 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:14.009924 1661480 logs.go:282] 0 containers: []
	W0804 09:04:14.009937 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:14.009947 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:14.009962 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:14.061962 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:14.061980 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:14.105751 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:14.105768 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:14.159867 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:14.152559   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.153066   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154592   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154981   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.156381   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:14.152559   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.153066   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154592   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154981   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.156381   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:14.159880 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:14.159892 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:14.180879 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:14.180897 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:14.223204 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:14.223223 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:14.244081 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:14.244097 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:14.279867 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:14.279884 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:14.357345 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:14.357368 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:14.375771 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:14.375787 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:14.401599 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:14.401615 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:16.929311 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:16.929726 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:16.929806 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:16.949884 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:16.949946 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:16.966827 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:16.966875 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:16.984179 1661480 logs.go:282] 0 containers: []
	W0804 09:04:16.984194 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:16.984241 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:17.001543 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:17.001596 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:17.018974 1661480 logs.go:282] 0 containers: []
	W0804 09:04:17.018985 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:17.019032 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:17.037024 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:17.037087 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:17.067627 1661480 logs.go:282] 0 containers: []
	W0804 09:04:17.067640 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:17.067650 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:17.067662 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:17.089231 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:17.089266 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:17.145083 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:17.137004   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.137530   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139081   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139547   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.141048   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:17.137004   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.137530   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139081   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139547   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.141048   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:17.145095 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:17.145107 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:17.183037 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:17.183057 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:17.224495 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:17.224513 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:17.277939 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:17.277961 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:17.299213 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:17.299229 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:17.343379 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:17.343397 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:17.368834 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:17.368850 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:17.388736 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:17.388752 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:17.408859 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:17.408875 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:17.445491 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:17.445507 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:20.023254 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:20.023726 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:20.023805 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:20.042775 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:20.042834 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:20.060600 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:20.060658 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:20.078019 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.078036 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:20.078074 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:20.096002 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:20.096071 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:20.112684 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.112698 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:20.112741 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:20.130951 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:20.131021 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:20.147664 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.147675 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:20.147685 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:20.147696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:20.166143 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:20.166161 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:20.221888 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:20.214386   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.214988   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216543   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216938   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.218460   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:20.214386   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.214988   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216543   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216938   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.218460   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:20.221899 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:20.221912 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:20.247606 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:20.247623 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:20.269435 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:20.269454 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:20.322915 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:20.322934 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:20.344869 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:20.344885 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:20.388193 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:20.388210 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:20.424170 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:20.424187 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:20.496074 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:20.496094 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:20.522349 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:20.522368 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:20.563687 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:20.563710 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:23.085074 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:23.085599 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:23.085689 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:23.104776 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:23.104833 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:23.122616 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:23.122682 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:23.140381 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.140396 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:23.140449 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:23.158043 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:23.158105 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:23.175945 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.175960 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:23.176004 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:23.193909 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:23.193981 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:23.211258 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.211272 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:23.211282 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:23.211292 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:23.236427 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:23.236445 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:23.275922 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:23.275944 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:23.296315 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:23.296332 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:23.317009 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:23.317026 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:23.357932 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:23.357953 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:23.394105 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:23.394122 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:23.467404 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:23.467423 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:23.494717 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:23.494734 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:23.515040 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:23.515055 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:23.566202 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:23.566221 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:23.586603 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:23.586621 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:23.640949 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:23.633581   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.634121   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.635682   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.636105   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.637658   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:23.633581   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.634121   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.635682   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.636105   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.637658   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:04:26.142544 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:26.143011 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:26.143111 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:26.163238 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:26.163305 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:26.181526 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:26.181598 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:26.198994 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.199008 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:26.199055 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:26.216773 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:26.216843 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:26.234131 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.234150 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:26.234204 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:26.251698 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:26.251757 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:26.269113 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.269125 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:26.269136 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:26.269147 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:26.309761 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:26.309780 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:26.362115 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:26.362133 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:26.382406 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:26.382421 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:26.427317 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:26.427338 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:26.445864 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:26.445879 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:26.470826 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:26.470845 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:26.490799 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:26.490814 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:26.526252 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:26.526276 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:26.599966 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:26.599993 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:26.655307 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:26.648488   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.649034   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.650536   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.650909   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.652405   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:26.655322 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:26.655332 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:26.680910 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:26.680927 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
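Each "Checking apiserver healthz" / "stopped:" pair above is one iteration of minikube's readiness poll (the api_server.go:253/269 lines), retried roughly every three seconds. The sketch below approximates that loop under stated assumptions: it skips TLS verification and uses a fixed one-minute deadline, whereas the real check authenticates against the cluster CA and uses minikube's own wait timeout.

    // healthz_poll.go: a minimal approximation of the retry pattern in the
    // log, not minikube's actual implementation.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Assumption: skip certificate verification for the sketch; the
    		// real check trusts the cluster CA instead.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(1 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.49.2:8441/healthz")
    		if err != nil {
    			// Mirrors the api_server.go:269 "stopped:" lines above.
    			fmt.Println("stopped:", err)
    		} else {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.Status)
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s interval in the log
    	}
    	fmt.Println("gave up waiting for healthz")
    }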
	I0804 09:04:29.201316 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:29.201803 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:29.201888 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:29.220916 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:29.220981 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:29.240273 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:29.240334 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:29.258749 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.258769 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:29.258820 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:29.276728 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:29.276789 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:29.294103 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.294118 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:29.294162 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:29.312051 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:29.312121 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:29.329450 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.329463 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:29.329472 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:29.329482 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:29.406478 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:29.406501 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:29.449867 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:29.449885 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:29.505732 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:29.505753 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:29.527260 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:29.527278 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:29.568876 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:29.568900 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:29.588395 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:29.588411 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:29.642645 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:29.635519   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.636038   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.637658   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.638071   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.639537   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:29.642654 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:29.642665 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:29.668637 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:29.668654 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:29.693869 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:29.693888 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:29.714488 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:29.714503 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:32.250740 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:32.251210 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:32.251290 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:32.270825 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:32.270884 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:32.288747 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:32.288802 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:32.306493 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.306505 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:32.306552 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:32.323960 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:32.324014 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:32.341171 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.341187 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:32.341230 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:32.358803 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:32.358860 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:32.375636 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.375647 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:32.375657 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:32.375670 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:32.395884 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:32.395899 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:32.438480 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:32.438499 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:32.482900 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:32.482918 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:32.518645 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:32.518662 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:32.591929 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:32.591950 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:32.644879 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:32.644899 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:32.665398 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:32.665413 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:32.684813 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:32.684830 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:32.738309 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:32.731481   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.731997   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.733547   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.733950   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.735467   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:32.738320 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:32.738331 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:32.763969 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:32.763987 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
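Between health checks, minikube enumerates the control-plane containers one component at a time, which is what the logs.go:282 lines record: two apiserver containers, one etcd, two schedulers, and no coredns, kube-proxy, or kindnet. A minimal reproduction of that discovery step, assuming the same k8s_ container-name prefix and Docker CLI invocation shown in the recorded commands:

    // discover.go: sketch of the per-component container discovery step.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, c := range components {
    		// Same command as the log: docker ps -a --filter=name=k8s_<c> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }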
	I0804 09:04:35.291352 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:35.291810 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:35.291895 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:35.311568 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:35.311636 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:35.329568 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:35.329650 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:35.347266 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.347276 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:35.347315 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:35.364992 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:35.365054 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:35.381643 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.381657 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:35.381696 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:35.398762 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:35.398830 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:35.415553 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.415568 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:35.415579 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:35.415590 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:35.434052 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:35.434066 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:35.488645 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:35.481621   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.482093   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.483610   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.483982   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.485495   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:35.488656 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:35.488666 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:35.532366 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:35.532384 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:35.552538 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:35.552555 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:35.588052 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:35.588072 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:35.666164 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:35.666184 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:35.693682 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:35.693700 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:35.718989 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:35.719004 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:35.739132 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:35.739149 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:35.792779 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:35.792799 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:38.337951 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:38.338399 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:38.338478 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:38.357165 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:38.357226 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:38.374097 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:38.374155 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:38.391382 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.391396 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:38.391442 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:38.408993 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:38.409051 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:38.426050 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.426065 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:38.426108 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:38.443913 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:38.443969 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:38.460846 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.460858 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:38.460868 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:38.460883 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:38.538741 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:38.538763 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:38.557324 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:38.557344 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:38.611322 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:38.604134   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.604668   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606185   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606583   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.607975   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:38.611333 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:38.611344 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:38.651785 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:38.651803 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:38.704282 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:38.704300 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:38.748296 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:38.748316 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:38.788934 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:38.788954 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:38.813911 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:38.813928 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:38.838936 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:38.838953 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:38.858717 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:38.858736 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
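Once the container IDs are known, each "Gathering logs for ..." step shells out to docker logs --tail 400 for that container (and to journalctl for the kubelet and Docker units). A stripped-down sketch of the per-container fetch, using one of the kube-apiserver IDs recorded above as a placeholder:

    // gather.go: sketch of one log-gathering step from the trace above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	id := "20f5be32354b" // one of the kube-apiserver container IDs in the log
    	// Same command the trace records: docker logs --tail 400 <id>
    	cmd := exec.Command("docker", "logs", "--tail", "400", id)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "docker logs failed:", err)
    	}
    }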
	I0804 09:04:41.379671 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:41.380124 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:41.380209 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:41.398983 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:41.399040 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:41.417150 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:41.417203 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:41.434806 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.434819 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:41.434860 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:41.452250 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:41.452314 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:41.469520 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.469535 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:41.469583 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:41.487739 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:41.487809 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:41.505191 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.505207 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:41.505219 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:41.505231 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:41.525061 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:41.525078 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:41.560648 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:41.560665 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:41.586056 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:41.586076 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:41.606348 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:41.606364 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:41.647048 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:41.647072 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:41.688983 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:41.689004 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:41.770298 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:41.770332 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:41.790956 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:41.790978 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:41.845157 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:41.838079   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.838593   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840185   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840709   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.842215   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:41.845168 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:41.845179 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:41.870756 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:41.870774 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:44.425368 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:44.425831 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:44.425949 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:44.446645 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:44.446699 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:44.464564 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:44.464619 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:44.482513 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.482525 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:44.482568 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:44.500219 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:44.500270 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:44.517554 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.517571 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:44.517623 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:44.535531 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:44.535609 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:44.552895 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.552911 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:44.552922 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:44.552937 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:44.588906 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:44.588923 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:44.668044 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:44.668073 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:44.688833 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:44.688850 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:44.744103 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:44.737229   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.737782   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739326   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739679   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.741202   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:44.744120 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:44.744132 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:44.771558 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:44.771575 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:44.798390 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:44.798407 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:44.818712 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:44.818730 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:44.860754 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:44.860771 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:44.903154 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:44.903172 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:44.959593 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:44.959614 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:47.481798 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:47.482267 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:47.482394 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:47.501436 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:47.501507 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:47.519403 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:47.519456 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:47.536505 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.536517 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:47.536559 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:47.555052 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:47.555108 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:47.572292 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.572308 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:47.572378 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:47.589316 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:47.589387 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:47.606568 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.606583 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:47.606592 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:47.606605 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:47.660924 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:47.654305   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.654756   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656225   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656600   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.658040   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:47.660934 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:47.660945 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:47.686316 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:47.686336 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:47.711494 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:47.711510 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:47.755256 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:47.755279 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:47.808519 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:47.808541 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:47.829575 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:47.829592 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:47.850735 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:47.850752 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:47.892056 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:47.892076 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:47.929604 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:47.929623 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:48.003755 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:48.003779 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:50.522949 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:50.523426 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:50.523511 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:50.542559 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:50.542623 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:50.561817 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:50.561873 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:50.580293 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.580306 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:50.580358 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:50.598065 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:50.598132 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:50.615051 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.615064 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:50.615102 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:50.634158 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:50.634219 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:50.651067 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.651079 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:50.651088 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:50.651098 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:50.675452 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:50.675468 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:50.696108 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:50.696124 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:50.739266 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:50.739285 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:50.757817 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:50.757839 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:50.812181 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:50.805280   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.805733   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807319   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807746   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.809261   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:04:50.805280   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.805733   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807319   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807746   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.809261   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
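
The "describe nodes" gatherer shells into the node and runs the version-matched kubectl against the node-local kubeconfig, whose server entry points at localhost:8441; that is why the stderr above dials [::1]:8441 rather than 192.168.49.2. A sketch of running the same command and reporting failure the way the W-level entry does (the binary path and flag are copied from the log; the wrapper itself is illustrative, and the real thing runs over SSH inside the node):

    // describe_nodes.go - sketch of the "describe nodes" log gatherer.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	if err := cmd.Run(); err != nil {
    		// Mirror the "failed describe nodes" warning: keep the command,
    		// its exit status, and both streams for the report.
    		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
    			err, stdout.String(), stderr.String())
    		return
    	}
    	fmt.Print(stdout.String())
    }
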
	I0804 09:04:50.812192 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:50.812204 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:50.837813 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:50.837830 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:50.881332 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:50.881350 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:50.933150 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:50.933172 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:50.955107 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:50.955127 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:50.991284 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:50.991302 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:53.570964 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:53.571444 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:53.571539 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:53.591352 1661480 logs.go:282] 3 containers: [45dd8fe239bc 20f5be32354b a70a68ec6169]
	I0804 09:04:53.591419 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:53.610707 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:53.610764 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:53.630949 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.630964 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:53.631011 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:53.665523 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:53.665599 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:53.683393 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.683410 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:53.683463 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:53.700974 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:53.701080 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:53.719520 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.719534 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:53.719543 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:53.719556 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:53.801389 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:53.801410 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:05:15.553212 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.751766465s)
	W0804 09:05:15.553274 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:03.857554   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:13.859266   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:15.547844   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:55018->[::1]:8441: read: connection reset by peer"
	E0804 09:05:15.548469   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:15.550082   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:03.857554   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:13.859266   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:15.547844   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:55018->[::1]:8441: read: connection reset by peer"
	E0804 09:05:15.548469   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:15.550082   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
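
This cycle is the one informative variation in the section: at 09:05:03 and 09:05:13 the TCP connect succeeded but the TLS handshake timed out, and at 09:05:15 a read was reset mid-flight, which suggests an apiserver process that came up, accepted connections, and died again before serving. A sketch that separates the two failure modes with a raw TLS dial (host and port from the log; the classification logic is illustrative):

    // classify_dial.go - sketch distinguishing "refused" from a TLS stall.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"time"
    )

    func isTimeout(err error) bool {
    	ne, ok := err.(net.Error)
    	return ok && ne.Timeout()
    }

    func main() {
    	dialer := &net.Dialer{Timeout: 2 * time.Second}
    	conn, err := tls.DialWithDialer(dialer, "tcp", "localhost:8441",
    		&tls.Config{InsecureSkipVerify: true})
    	switch {
    	case err == nil:
    		fmt.Println("handshake completed; something is serving TLS")
    		conn.Close()
    	case isTimeout(err):
    		// TCP connect worked but the handshake stalled: a process is
    		// listening yet not healthy (the 09:05:03 errors above).
    		fmt.Println("TLS handshake timeout:", err)
    	default:
    		// Nothing listening at all (the usual "connection refused").
    		fmt.Println("dial failed:", err)
    	}
    }
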
	I0804 09:05:15.553282 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:05:15.553295 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	W0804 09:05:15.571925 1661480 logs.go:130] failed kube-apiserver [20f5be32354b]: command: /bin/bash -c "docker logs --tail 400 20f5be32354b" /bin/bash -c "docker logs --tail 400 20f5be32354b": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 20f5be32354b
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 20f5be32354b
	
	** /stderr **
	I0804 09:05:15.571940 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:15.571956 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:15.597489 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:05:15.597508 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	W0804 09:05:15.615861 1661480 logs.go:130] failed etcd [e4c966ab8463]: command: /bin/bash -c "docker logs --tail 400 e4c966ab8463" /bin/bash -c "docker logs --tail 400 e4c966ab8463": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: e4c966ab8463
	 output: 
	** stderr ** 
	Error response from daemon: No such container: e4c966ab8463
	
	** /stderr **
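
Between the 09:04:53 enumeration and this collection, the crash-looping apiserver (20f5be32354b) and the old etcd (e4c966ab8463) were garbage-collected and replaced, so docker logs now answers "No such container". The gatherer treats that as a warning and moves on rather than aborting the cycle; a sketch of the same tolerant collection (the container IDs come from the log, the helper is illustrative):

    // gather_logs.go - sketch of log collection tolerating GC'd containers.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func gather(id string) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	if err != nil && strings.Contains(string(out), "No such container") {
    		// The ID list was enumerated seconds ago; a crash-looping pod can
    		// be garbage-collected in between. Warn and continue, as logs.go does.
    		fmt.Printf("W failed %s: container vanished since enumeration\n", id)
    		return
    	}
    	if err != nil {
    		fmt.Printf("W failed %s: %v\n", id, err)
    		return
    	}
    	fmt.Printf("=== %s ===\n%s", id, out)
    }

    func main() {
    	for _, id := range []string{"45dd8fe239bc", "20f5be32354b", "e4c966ab8463"} {
    		gather(id)
    	}
    }
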
	I0804 09:05:15.615870 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:15.615881 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:15.658508 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:15.658527 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:15.710914 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:15.710934 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:15.756829 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:15.756848 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:15.775591 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:15.775608 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:15.802209 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:15.802225 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:15.822675 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:15.822691 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:18.362881 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:18.363337 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:18.363427 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:18.382725 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:18.382780 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:18.400834 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:18.400903 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:18.418630 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.418643 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:18.418699 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:18.436449 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:18.436510 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:18.453593 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.453609 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:18.453670 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:18.470809 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:18.470867 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:18.487902 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.487915 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:18.487925 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:18.487935 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:18.570521 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:18.570543 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:18.625182 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:18.618258   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.618805   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620328   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620711   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.622272   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:18.618258   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.618805   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620328   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620711   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.622272   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:18.625193 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:18.625204 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:18.651165 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:18.651185 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:18.671188 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:18.671203 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:18.714383 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:18.714403 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:18.750997 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:18.751016 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:18.769854 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:18.769870 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:18.795165 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:18.795180 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:18.849360 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:18.849380 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:18.871229 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:18.871254 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
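
The kubelet, Docker, and cri-docker logs come straight from the systemd journal: each -u flag selects a unit (multiple -u flags are OR'ed, as in "-u docker -u cri-docker"), and -n 400 caps the tail to match the --tail 400 used for containers. A sketch of the same collection (unit names and line count from the log; the wrapper is illustrative):

    // journal_tail.go - sketch of tailing systemd units the way logs.go does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitTail returns the last n journal lines for the given units.
    func unitTail(n int, units ...string) (string, error) {
    	args := []string{"journalctl"}
    	for _, u := range units {
    		args = append(args, "-u", u)
    	}
    	args = append(args, "-n", fmt.Sprint(n))
    	out, err := exec.Command("sudo", args...).Output()
    	return string(out), err
    }

    func main() {
    	for _, units := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
    		out, err := unitTail(400, units...)
    		if err != nil {
    			fmt.Println("journalctl failed:", err)
    			continue
    		}
    		fmt.Print(out)
    	}
    }
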
	I0804 09:05:21.418353 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:21.418833 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:21.418922 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:21.438054 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:21.438113 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:21.455587 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:21.455654 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:21.472934 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.472954 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:21.473001 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:21.491717 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:21.491795 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:21.509543 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.509559 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:21.509604 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:21.527160 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:21.527217 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:21.544207 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.544222 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:21.544234 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:21.544243 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:21.563890 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:21.563904 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:21.583720 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:21.583737 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:21.602128 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:21.602141 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:21.658059 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:21.650567   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.651103   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.652665   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.653107   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.654674   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:21.650567   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.651103   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.652665   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.653107   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.654674   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:21.658074 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:21.658084 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:21.685555 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:21.685574 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:21.712525 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:21.712541 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:21.756390 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:21.756410 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:21.810403 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:21.810424 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:21.853991 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:21.854013 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:21.889567 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:21.889585 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
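
The "container status" line in each cycle relies on a small shell idiom: `which crictl || echo crictl` expands to crictl's full path when the binary exists, or to the bare word crictl when it does not, in which case the sudo invocation fails and the outer || falls through to plain docker ps -a. The same fallback expressed natively in Go (a sketch; the helper name is illustrative):

    // container_status.go - sketch of the crictl-or-docker fallback.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl (the CRI-level view) and falls back
    // to plain docker when crictl is missing or fails, matching the
    // "which crictl || echo crictl ... || docker ps -a" lines above.
    func containerStatus() (string, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    		return
    	}
    	fmt.Print(out)
    }

Preferring crictl keeps the listing runtime-agnostic; the docker fallback only matters on nodes, like this one, where the CRI tooling is absent or broken.
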
	I0804 09:05:24.473851 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:24.474320 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:24.474415 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:24.493643 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:24.493706 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:24.511933 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:24.511991 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:24.529775 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.529790 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:24.529844 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:24.547893 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:24.547953 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:24.565265 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.565280 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:24.565322 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:24.582372 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:24.582439 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:24.600116 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.600132 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:24.600144 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:24.600157 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:24.625394 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:24.625413 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:24.649921 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:24.649938 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:24.669931 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:24.669947 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:24.724632 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:24.717099   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.717627   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719144   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719576   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.721085   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:24.717099   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.717627   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719144   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719576   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.721085   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:24.724643 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:24.724654 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:24.745114 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:24.745130 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:24.791138 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:24.791159 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:24.844211 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:24.844232 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:24.864815 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:24.864831 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:24.905868 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:24.905889 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:24.944193 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:24.944210 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:27.526606 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:27.527052 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:27.527133 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:27.546023 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:27.546102 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:27.564059 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:27.564125 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:27.581355 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.581372 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:27.581421 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:27.598969 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:27.599042 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:27.616326 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.616340 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:27.616398 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:27.633567 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:27.633636 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:27.650100 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.650116 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:27.650129 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:27.650143 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:27.674675 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:27.674691 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:27.694432 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:27.694452 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:27.740275 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:27.740293 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:27.792672 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:27.792692 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:27.837134 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:27.837152 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:27.862402 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:27.862418 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:27.884136 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:27.884160 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:27.921302 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:27.921320 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:28.005198 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:28.005221 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:28.024305 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:28.024319 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:28.078812 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:28.071766   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.072266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.073814   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.074266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.075728   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:28.071766   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.072266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.073814   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.074266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.075728   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
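
Taken together, the section is a single wait loop: probe healthz, and on failure spend a second or two collecting the full diagnostic bundle before the next attempt, which is why consecutive probes land about three seconds apart (09:05:18, 09:05:21, 09:05:24, ...). A sketch of that outer loop under the same assumptions as the probe sketch earlier (collectDiagnostics stands in for the gatherers and is illustrative, as is the overall deadline):

    // wait_loop.go - sketch of the outer probe-then-diagnose loop.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func healthy(client *http.Client) bool {
    	resp, err := client.Get("https://192.168.49.2:8441/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err)
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    // collectDiagnostics stands in for the per-cycle gatherers shown above
    // (kubelet journal, describe nodes, per-container logs, dmesg, ...).
    func collectDiagnostics() { fmt.Println("gathering logs ...") }

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(6 * time.Minute) // illustrative budget
    	for time.Now().Before(deadline) {
    		if healthy(client) {
    			fmt.Println("apiserver is healthy")
    			return
    		}
    		collectDiagnostics()
    		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
    	}
    	fmt.Println("timed out waiting for apiserver")
    }
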
	I0804 09:05:30.579425 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:30.579882 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:30.579979 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:30.599053 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:30.599118 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:30.616639 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:30.616706 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:30.634419 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.634434 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:30.634478 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:30.652037 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:30.652091 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:30.668537 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.668550 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:30.668601 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:30.686111 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:30.686177 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:30.703170 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.703183 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:30.703197 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:30.703208 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:30.780512 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:30.780534 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:30.835862 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:30.828571   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.829089   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.830648   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.831084   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.832656   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:30.828571   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.829089   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.830648   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.831084   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.832656   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:30.835871 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:30.835884 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:30.862953 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:30.862971 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:30.906430 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:30.906449 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:30.962204 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:30.962222 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:30.983077 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:30.983098 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:31.027250 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:31.027271 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:31.064477 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:31.064493 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:31.082683 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:31.082700 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:31.107897 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:31.107916 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:33.629309 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:33.629783 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:33.629874 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:33.649062 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:33.649144 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:33.667342 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:33.667406 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:33.684879 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.684891 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:33.684936 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:33.702256 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:33.702310 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:33.719436 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.719447 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:33.719486 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:33.737005 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:33.737062 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:33.754700 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.754716 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:33.754728 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:33.754740 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:33.830846 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:33.830868 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:33.856980 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:33.856997 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:33.909389 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:33.909410 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:33.929778 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:33.929794 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:33.965678 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:33.965696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:33.984178 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:33.984194 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:34.038018 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:34.031060   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.031554   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033042   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033546   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.035064   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:34.031060   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.031554   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033042   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033546   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.035064   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:34.038028 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:34.038040 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:34.065147 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:34.065164 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:34.085201 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:34.085217 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:34.131576 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:34.131598 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:36.677320 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:36.677738 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:36.677816 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:36.696778 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:36.696834 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:36.714338 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:36.714400 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:36.731585 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.731597 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:36.731648 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:36.749262 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:36.749323 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:36.766369 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.766382 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:36.766424 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:36.783683 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:36.783747 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:36.800562 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.800577 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:36.800589 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:36.800601 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:36.826322 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:36.826341 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:36.846705 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:36.846725 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:36.900647 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:36.900670 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:36.945061 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:36.945082 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:36.980935 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:36.980953 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:36.999355 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:36.999370 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:37.045302 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:37.045321 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:37.066069 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:37.066087 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:37.147619 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:37.147641 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:37.204004 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:37.196190   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.197826   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.198292   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.199819   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.200207   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:37.204017 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:37.204029 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:39.729976 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:39.730386 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:39.730457 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:39.749322 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:39.749391 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:39.767341 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:39.767399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:39.783917 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.783928 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:39.783968 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:39.801060 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:39.801127 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:39.818194 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.818205 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:39.818259 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:39.835049 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:39.835119 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:39.851781 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.851792 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:39.851802 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:39.851811 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:39.871504 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:39.871519 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:39.926544 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:39.919634   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.920101   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.921669   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.922050   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.923665   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:39.926554 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:39.926565 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:39.952624 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:39.952638 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:39.972011 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:39.972027 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:40.025874 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:40.025896 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:40.109801 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:40.109821 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:40.136255 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:40.136272 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:40.183580 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:40.183599 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:40.204493 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:40.204511 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:40.248273 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:40.248291 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:42.784699 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:42.785199 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:42.785329 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:42.804095 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:42.804174 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:42.821904 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:42.821955 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:42.839033 1661480 logs.go:282] 0 containers: []
	W0804 09:05:42.839045 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:42.839085 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:42.857060 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:42.857129 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:42.874536 1661480 logs.go:282] 0 containers: []
	W0804 09:05:42.874549 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:42.874606 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:42.892601 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:42.892659 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:42.910100 1661480 logs.go:282] 0 containers: []
	W0804 09:05:42.910120 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:42.910129 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:42.910139 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:42.934869 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:42.934885 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:42.953955 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:42.953974 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:43.006663 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:43.006683 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:43.053918 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:43.053939 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:43.090417 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:43.090434 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:43.174196 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:43.174219 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:43.192681 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:43.192699 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:43.248572 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:43.241692   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.242267   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.243809   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.244176   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:43.245595   20157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:43.248582 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:43.248595 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:43.273840 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:43.273857 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:43.317403 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:43.317424 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:45.839142 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:45.839624 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:45.839725 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:45.858871 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:45.858933 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:45.877176 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:45.877228 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:45.894585 1661480 logs.go:282] 0 containers: []
	W0804 09:05:45.894599 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:45.894640 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:45.911858 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:45.911915 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:45.929219 1661480 logs.go:282] 0 containers: []
	W0804 09:05:45.929231 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:45.929293 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:45.946407 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:45.946463 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:45.964503 1661480 logs.go:282] 0 containers: []
	W0804 09:05:45.964514 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:45.964524 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:45.964532 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:46.041227 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:46.041258 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:46.096253 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:46.089547   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.090076   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.091586   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.091864   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:46.093286   20280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:46.096264 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:46.096275 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:46.121027 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:46.121043 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:46.140652 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:46.140668 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:46.184099 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:46.184117 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:46.239471 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:46.239498 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:46.260203 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:46.260218 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:46.304661 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:46.304683 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:46.322929 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:46.322946 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:46.349597 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:46.349614 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:48.889394 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:48.889879 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:48.889967 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:48.909391 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:48.909453 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:48.927208 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:48.927271 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:48.944578 1661480 logs.go:282] 0 containers: []
	W0804 09:05:48.944589 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:48.944627 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:48.962359 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:48.962441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:48.979597 1661480 logs.go:282] 0 containers: []
	W0804 09:05:48.979608 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:48.979646 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:48.996244 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:48.996323 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:49.013599 1661480 logs.go:282] 0 containers: []
	W0804 09:05:49.013613 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:49.013624 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:49.013644 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:49.033537 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:49.033554 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:49.086196 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:49.086216 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:49.106369 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:49.106383 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:49.141789 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:49.141805 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:49.221717 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:49.221741 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:49.276646 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:49.269311   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.269820   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.271422   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.271819   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:49.273274   20526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:49.276656 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:49.276670 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:49.321356 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:49.321377 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:49.365595 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:49.365613 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:49.384099 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:49.384117 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:49.411209 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:49.411228 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:51.937395 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:51.937838 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:51.937922 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:51.956704 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:51.956769 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:51.974346 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:51.974399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:51.991495 1661480 logs.go:282] 0 containers: []
	W0804 09:05:51.991507 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:51.991549 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:52.011643 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:52.011711 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:52.029478 1661480 logs.go:282] 0 containers: []
	W0804 09:05:52.029490 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:52.029540 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:52.046644 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:52.046722 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:52.064950 1661480 logs.go:282] 0 containers: []
	W0804 09:05:52.064963 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:52.064974 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:52.064986 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:52.121641 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:52.121666 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:52.207435 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:52.207466 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:52.234341 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:52.234364 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:52.254927 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:52.254946 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:52.298877 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:52.298897 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:52.334848 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:52.334867 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:52.353549 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:52.353565 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:52.406664 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:52.399095   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.399713   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.400815   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.402371   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:52.402719   20714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:52.406679 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:52.406689 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:52.432229 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:52.432246 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:52.451833 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:52.451848 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:55.009056 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:55.009576 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:55.009639 1661480 kubeadm.go:593] duration metric: took 4m5.290563198s to restartPrimaryControlPlane
	W0804 09:05:55.009718 1661480 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 09:05:55.009762 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
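	The cycle above repeats unchanged: probe https://192.168.49.2:8441/healthz, get connection refused, re-gather component logs, until the 4m5s restart budget is exhausted and minikube falls back to a full kubeadm reset. The failing probe can be reproduced by hand; a minimal sketch, assuming shell access to the node (endpoint taken from the log, flags illustrative):

		# Same health endpoint api_server.go polls; -k because the apiserver
		# certificate is signed by the cluster CA, not a public one.
		curl -k --max-time 5 https://192.168.49.2:8441/healthz \
		  || echo "connection refused, as logged above"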
	I0804 09:05:55.871445 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:05:55.882275 1661480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:05:55.890471 1661480 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:05:55.890520 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:05:55.898415 1661480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:05:55.898428 1661480 kubeadm.go:157] found existing configuration files:
	
	I0804 09:05:55.898465 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:05:55.906151 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:05:55.906189 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:05:55.913607 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:05:55.921040 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:05:55.921073 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:05:55.928201 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:05:55.936065 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:05:55.936113 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:05:55.943534 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:05:55.951211 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:05:55.951253 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
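	The four grep/rm pairs above apply one rule per kubeconfig: keep /etc/kubernetes/<name>.conf only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm can write a fresh copy. A hedged bash equivalent of what the log records (paths and endpoint from the log; -q added for brevity):

		for f in admin kubelet controller-manager scheduler; do
		  # A non-zero grep (endpoint absent, or file missing) triggers removal.
		  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done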
	I0804 09:05:55.958383 1661480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:05:55.991847 1661480 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:05:55.991901 1661480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:05:56.004623 1661480 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:05:56.004692 1661480 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:05:56.004732 1661480 kubeadm.go:310] OS: Linux
	I0804 09:05:56.004768 1661480 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:05:56.004807 1661480 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:05:56.004862 1661480 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:05:56.004941 1661480 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:05:56.005006 1661480 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:05:56.005083 1661480 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:05:56.005137 1661480 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:05:56.005193 1661480 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:05:56.005278 1661480 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:05:56.054357 1661480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:05:56.054479 1661480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:05:56.054635 1661480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:05:56.064998 1661480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:05:56.067952 1661480 out.go:235]   - Generating certificates and keys ...
	I0804 09:05:56.068027 1661480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:05:56.068074 1661480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:05:56.068144 1661480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:05:56.068209 1661480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:05:56.068279 1661480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:05:56.068322 1661480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:05:56.068385 1661480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:05:56.068433 1661480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:05:56.068492 1661480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:05:56.068549 1661480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:05:56.068580 1661480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:05:56.068624 1661480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:05:56.846466 1661480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:05:57.293494 1661480 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:05:57.586648 1661480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:05:57.707352 1661480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:05:58.140308 1661480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:05:58.141365 1661480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:05:58.143879 1661480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:05:58.146322 1661480 out.go:235]   - Booting up control plane ...
	I0804 09:05:58.146440 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:05:58.146521 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:05:58.146580 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:05:58.157812 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:05:58.157949 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:05:58.163040 1661480 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:05:58.163314 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:05:58.163387 1661480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:05:58.241217 1661480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:05:58.241378 1661480 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:05:59.242975 1661480 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001870906s
	I0804 09:05:59.246768 1661480 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:05:59.246925 1661480 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I0804 09:05:59.247072 1661480 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:05:59.247191 1661480 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:06:00.899560 1661480 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.652491519s
	I0804 09:06:31.896796 1661480 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 32.64974442s
	I0804 09:09:59.247676 1661480 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	I0804 09:09:59.247761 1661480 kubeadm.go:310] 
	I0804 09:09:59.247995 1661480 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:09:59.248237 1661480 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0804 09:09:59.248440 1661480 kubeadm.go:310] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I0804 09:09:59.248589 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:09:59.248701 1661480 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:09:59.248843 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:09:59.248851 1661480 kubeadm.go:310] 
	I0804 09:09:59.251561 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:09:59.251846 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:09:59.251983 1661480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:09:59.252295 1661480 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:09:59.252358 1661480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
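	The crictl hint printed by kubeadm can be followed verbatim on the node; a sketch using only the commands the output itself suggests (CONTAINERID is a placeholder for the failing kube-apiserver container):

		# List Kubernetes containers via the cri-dockerd socket named in the log:
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
		# Inspect the failing container's logs:
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID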
	W0804 09:09:59.252583 1661480 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870906s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.652491519s
	[control-plane-check] kube-scheduler is healthy after 32.64974442s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
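	The crictl hint above can be made concrete for this run: the failing component is kube-apiserver, and the cri-dockerd socket path comes straight from the log. A minimal triage sketch (the container-ID placeholder is illustrative, not taken from this run):
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube-apiserver | grep -v pause
		# copy the CONTAINER column from the matching row, then dump its logs:
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs <CONTAINER-ID>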
	
	I0804 09:09:59.252631 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:10:00.037426 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:10:00.048756 1661480 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:10:00.048799 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:10:00.056703 1661480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:10:00.056711 1661480 kubeadm.go:157] found existing configuration files:
	
	I0804 09:10:00.056746 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:10:00.064271 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:10:00.064310 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:10:00.071720 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:10:00.079478 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:10:00.079512 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:10:00.086675 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:10:00.094268 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:10:00.094310 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:10:00.101549 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:10:00.108748 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:10:00.108780 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:10:00.115895 1661480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:10:00.150607 1661480 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:10:00.150679 1661480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:10:00.163722 1661480 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:10:00.163786 1661480 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:10:00.163846 1661480 kubeadm.go:310] OS: Linux
	I0804 09:10:00.163909 1661480 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:10:00.163960 1661480 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:10:00.164019 1661480 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:10:00.164060 1661480 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:10:00.164099 1661480 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:10:00.164143 1661480 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:10:00.164177 1661480 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:10:00.164213 1661480 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:10:00.164247 1661480 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:10:00.214655 1661480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:10:00.214804 1661480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:10:00.214924 1661480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:10:00.225204 1661480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:10:00.228114 1661480 out.go:235]   - Generating certificates and keys ...
	I0804 09:10:00.228235 1661480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:10:00.228353 1661480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:10:00.228472 1661480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:10:00.228537 1661480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:10:00.228597 1661480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:10:00.228639 1661480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:10:00.228694 1661480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:10:00.228785 1661480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:10:00.228876 1661480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:10:00.228943 1661480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:10:00.228999 1661480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:10:00.229083 1661480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:10:00.330549 1661480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:10:00.508036 1661480 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:10:00.741967 1661480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:10:01.526835 1661480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:10:01.662111 1661480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:10:01.662652 1661480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:10:01.664702 1661480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:10:01.666272 1661480 out.go:235]   - Booting up control plane ...
	I0804 09:10:01.666353 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:10:01.666413 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:10:01.667084 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:10:01.679192 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:10:01.679268 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:10:01.684800 1661480 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:10:01.685864 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:10:01.685922 1661480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:10:01.773321 1661480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:10:01.773477 1661480 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:10:02.774854 1661480 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001670583s
	I0804 09:10:02.777450 1661480 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:10:02.777542 1661480 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I0804 09:10:02.777645 1661480 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:10:02.777709 1661480 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:10:06.220867 1661480 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.44333807s
	I0804 09:10:36.606673 1661480 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 33.829135405s
	I0804 09:14:02.777907 1661480 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	I0804 09:14:02.777973 1661480 kubeadm.go:310] 
	I0804 09:14:02.778102 1661480 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:14:02.778204 1661480 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0804 09:14:02.778303 1661480 kubeadm.go:310] Here is one example of how you can list all running Kubernetes containers using crictl:
	I0804 09:14:02.778415 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:14:02.778499 1661480 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:14:02.778604 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:14:02.778614 1661480 kubeadm.go:310] 
	I0804 09:14:02.781964 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:14:02.782147 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:14:02.782232 1661480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:14:02.782512 1661480 kubeadm.go:310] error: error executing phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I0804 09:14:02.782622 1661480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
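	The two SystemVerification warnings above are environmental rather than fatal. A quick way to confirm which cgroup version this GCP host is actually running, assuming the standard unified-hierarchy mount point, is:
		stat -fc %T /sys/fs/cgroup/
		# prints "cgroup2fs" on a cgroups v2 host and "tmpfs" on a cgroups v1 host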
	I0804 09:14:02.782672 1661480 kubeadm.go:394] duration metric: took 12m13.088610065s to StartCluster
	I0804 09:14:02.782740 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 09:14:02.782800 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 09:14:02.821166 1661480 cri.go:89] found id: "c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	I0804 09:14:02.821177 1661480 cri.go:89] found id: ""
	I0804 09:14:02.821190 1661480 logs.go:282] 1 containers: [c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e]
	I0804 09:14:02.821273 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.824824 1661480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 09:14:02.824881 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 09:14:02.861272 1661480 cri.go:89] found id: "0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	I0804 09:14:02.861286 1661480 cri.go:89] found id: ""
	I0804 09:14:02.861293 1661480 logs.go:282] 1 containers: [0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1]
	I0804 09:14:02.861334 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.864640 1661480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 09:14:02.864684 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 09:14:02.896631 1661480 cri.go:89] found id: ""
	I0804 09:14:02.896648 1661480 logs.go:282] 0 containers: []
	W0804 09:14:02.896654 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:14:02.896660 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 09:14:02.896720 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 09:14:02.929947 1661480 cri.go:89] found id: "ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e"
	I0804 09:14:02.929961 1661480 cri.go:89] found id: ""
	I0804 09:14:02.929970 1661480 logs.go:282] 1 containers: [ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e]
	I0804 09:14:02.930026 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.933377 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 09:14:02.933429 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 09:14:02.966936 1661480 cri.go:89] found id: ""
	I0804 09:14:02.966951 1661480 logs.go:282] 0 containers: []
	W0804 09:14:02.966958 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:14:02.966962 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 09:14:02.967020 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 09:14:02.998599 1661480 cri.go:89] found id: "19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	I0804 09:14:02.998613 1661480 cri.go:89] found id: ""
	I0804 09:14:02.998622 1661480 logs.go:282] 1 containers: [19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec]
	I0804 09:14:02.998668 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:03.002053 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 09:14:03.002114 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 09:14:03.033926 1661480 cri.go:89] found id: ""
	I0804 09:14:03.033944 1661480 logs.go:282] 0 containers: []
	W0804 09:14:03.033953 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:14:03.033973 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:14:03.033985 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:14:03.052185 1661480 logs.go:123] Gathering logs for kube-scheduler [ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e] ...
	I0804 09:14:03.052200 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e"
	I0804 09:14:03.109809 1661480 logs.go:123] Gathering logs for kube-controller-manager [19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec] ...
	I0804 09:14:03.109829 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	I0804 09:14:03.144087 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:14:03.144103 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:14:03.194929 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:14:03.194949 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:14:03.230465 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:14:03.230483 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:14:03.308846 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:14:03.308871 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:14:03.364644 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:03.357491   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.358045   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.359651   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.360110   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.361657   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:14:03.364660 1661480 logs.go:123] Gathering logs for kube-apiserver [c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e] ...
	I0804 09:14:03.364672 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	I0804 09:14:03.404334 1661480 logs.go:123] Gathering logs for etcd [0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1] ...
	I0804 09:14:03.404352 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	W0804 09:14:03.438012 1661480 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error executing phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W0804 09:14:03.438066 1661480 out.go:270] * 
	W0804 09:14:03.438175 1661480 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted in full above]
	
	W0804 09:14:03.438197 1661480 out.go:270] * 
	W0804 09:14:03.440048 1661480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:14:03.443944 1661480 out.go:201] 
	W0804 09:14:03.444897 1661480 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted in full above]
	
	W0804 09:14:03.444921 1661480 out.go:270] * 
	W0804 09:14:03.446546 1661480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:14:03.447852 1661480 out.go:201] 
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       57 seconds ago       Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       58 seconds ago       Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:05.941044   25270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:05.941879   25270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:05.943433   25270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:05.943804   25270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:05.945325   25270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
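	"flag provided but not defined: -proxy-refresh-interval" means the etcd binary in this image rejects a flag present in its static-pod manifest, which would explain why etcd exits immediately and the apiserver's dials to 127.0.0.1:2379 are refused. A sketch of how to verify the mismatch, assuming kubeadm's usual manifest location (the image tag is a placeholder):
		grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml
		# then list the flags the image's etcd binary actually supports:
		sudo docker run --rm <etcd-image:tag> etcd --help | grep -i proxy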
	
	
	
	==> kernel <==
	 09:14:05 up 1 day, 17:55,  0 users,  load average: 0.03, 0.09, 0.23
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:12:58.837427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:11.033725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:11.088932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Aug 04 09:13:48 functional-699837 kubelet[23032]: I0804 09:13:48.644257   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:13:48 functional-699837 kubelet[23032]: E0804 09:13:48.644416   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:13:50 functional-699837 kubelet[23032]: I0804 09:13:50.631932   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:13:50 functional-699837 kubelet[23032]: E0804 09:13:50.632326   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:13:51 functional-699837 kubelet[23032]: E0804 09:13:51.608055   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:52 functional-699837 kubelet[23032]: E0804 09:13:52.692421   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: E0804 09:13:55.349669   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: E0804 09:13:55.643765   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: I0804 09:13:55.643863   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:13:55 functional-699837 kubelet[23032]: E0804 09:13:55.644036   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.633505   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.633818   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.643831   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.643903   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.644026   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:13:58 functional-699837 kubelet[23032]: E0804 09:13:58.609095   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:59 functional-699837 kubelet[23032]: E0804 09:13:59.432444   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:14:02 functional-699837 kubelet[23032]: E0804 09:14:02.693365   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: I0804 09:14:04.635636   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: E0804 09:14:04.636090   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.350524   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.610218   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

-- /stdout --
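The apiserver log above is the root of this cascade: it dies with "Error creating leases: error creating storage factory: context deadline exceeded" after twenty seconds of "connection refused" dials to etcd at 127.0.0.1:2379, and the kubelet log shows the etcd container itself stuck in CrashLoopBackOff ("back-off 2m40s restarting failed container=etcd"). A minimal triage sketch against the node container, assuming etcdctl and ss are present in the kicbase image and that etcd's serving certs live at minikube's default /var/lib/minikube/certs/etcd (both assumptions, not confirmed by this report):

	# Is anything listening on the etcd client port inside the node?
	docker exec functional-699837 sh -c 'ss -ltn | grep 2379 || echo "nothing on 2379"'
	# If it is, ask etcd for its own health using the minikube-managed certs
	docker exec functional-699837 etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health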
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (266.365891ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/ComponentHealth (1.59s)
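Since these post-mortems query one status field at a time, note that minikube status takes a single Go template over the whole status struct; a sketch combining the fields used in this report (Host and APIServer appear above; Kubelet is an assumed field name):

	out/minikube-linux-amd64 status -p functional-699837 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'

In the state captured here this should report the host Running and the apiserver Stopped, which is exactly the combination that makes the harness exit with status 2 and the note "may be ok".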

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-699837 apply -f testdata/invalidsvc.yaml
functional_test.go:2338: (dbg) Non-zero exit: kubectl --context functional-699837 apply -f testdata/invalidsvc.yaml: exit status 1 (57.351134ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2340: kubectl --context functional-699837 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/InvalidService (0.06s)
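The failure mode here is worth reading precisely: kubectl has to download the OpenAPI schema from the apiserver to do client-side validation, so with the apiserver down the command fails before the manifest's actual invalidity is ever evaluated. Turning validation off, as the error message suggests, would not rescue the test, since the apply itself still needs the API; a sketch of that dead end:

	kubectl --context functional-699837 apply --validate=false -f testdata/invalidsvc.yaml
	# still fails while the apiserver is down, with something like:
	#   dial tcp 192.168.49.2:8441: connect: connection refused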

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd (1.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-699837 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-699837 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-699837 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-699837 --alsologtostderr -v=1] stderr:
I0804 09:14:13.240017 1684792 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:13.240154 1684792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:13.240167 1684792 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:13.240176 1684792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:13.240507 1684792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:13.240863 1684792 mustload.go:65] Loading cluster: functional-699837
I0804 09:14:13.241427 1684792 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:13.242026 1684792 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:13.260188 1684792 host.go:66] Checking if "functional-699837" exists ...
I0804 09:14:13.260454 1684792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0804 09:14:13.311849 1684792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-08-04 09:14:13.302244063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0804 09:14:13.312019 1684792 api_server.go:166] Checking apiserver status ...
I0804 09:14:13.312082 1684792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 09:14:13.312131 1684792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:13.329401 1684792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
W0804 09:14:13.432229 1684792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0804 09:14:13.434148 1684792 out.go:177] * The control-plane node functional-699837 apiserver is not running: (state=Stopped)
I0804 09:14:13.435371 1684792 out.go:177]   To start a cluster, run: "minikube start -p functional-699837"
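The stderr trace shows why no URL was ever printed: before starting the dashboard, minikube probes for a running apiserver over SSH (the pgrep at api_server.go:166 above), and short-circuits with state=Stopped when the probe finds no process. The probe can be replayed by hand, assuming the profile's SSH access still works (how the pattern survives quoting through the ssh layer is a guess):

	out/minikube-linux-amd64 -p functional-699837 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# empty output / exit status 1 reproduces the short-circuit seen above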
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
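The inspect output localizes the problem: the node container itself is fine (State.Status running, RestartCount 0, apiserver port 8441 published at 127.0.0.1:32786), so the outage is entirely inside the node. The Go template minikube used earlier in this trace to find the SSH port generalizes to any published port; for the apiserver mapping:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-699837
	# prints 32786 for this run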
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (355.556615ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                  ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-699837 ssh sudo systemctl is-active crio                                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh       │ functional-699837 ssh -- ls -la /mount-9p                                                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh       │ functional-699837 ssh cat /mount-9p/test-1754298848740038253                                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh       │ functional-699837 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh       │ functional-699837 ssh sudo umount -f /mount-9p                                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount     │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdspecific-port2621928662/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh       │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ start     │ -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0                        │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image     │ functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh       │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh       │ functional-699837 ssh -- ls -la /mount-9p                                                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ start     │ -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0                        │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image     │ functional-699837 image ls                                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh       │ functional-699837 ssh sudo umount -f /mount-9p                                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ start     │ -p functional-699837 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image     │ functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ dashboard │ --url --port 36195 -p functional-699837 --alsologtostderr -v=1                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ mount     │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount1 --alsologtostderr -v=1                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh       │ functional-699837 ssh findmnt -T /mount1                                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount     │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount2 --alsologtostderr -v=1                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ mount     │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount3 --alsologtostderr -v=1                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh       │ functional-699837 ssh sudo cat /etc/ssl/certs/1582690.pem                                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image     │ functional-699837 image ls                                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh       │ functional-699837 ssh findmnt -T /mount2                                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh       │ functional-699837 ssh sudo cat /usr/share/ca-certificates/1582690.pem                                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:14:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:14:12.992327 1684525 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:12.992632 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992647 1684525 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:12.992653 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992985 1684525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:12.993729 1684525 out.go:352] Setting JSON to false
	I0804 09:14:12.995013 1684525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150942,"bootTime":1754147911,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:12.995107 1684525 start.go:140] virtualization: kvm guest
	I0804 09:14:12.997234 1684525 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:12.998435 1684525 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:12.998495 1684525 notify.go:220] Checking for updates...
	I0804 09:14:13.000523 1684525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:13.001833 1684525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:13.003094 1684525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:13.004247 1684525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:13.005485 1684525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:13.006929 1684525 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:13.007672 1684525 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:13.037008 1684525 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:13.037170 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.108391 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:58 SystemTime:2025-08-04 09:14:13.099283492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.108492 1684525 docker.go:318] overlay module found
	I0804 09:14:13.109830 1684525 out.go:177] * Using the docker driver based on existing profile
	I0804 09:14:13.110806 1684525 start.go:304] selected driver: docker
	I0804 09:14:13.110821 1684525 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.110918 1684525 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:13.111010 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.174998 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:57 SystemTime:2025-08-04 09:14:13.163491877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.175928 1684525 cni.go:84] Creating CNI manager for ""
	I0804 09:14:13.176003 1684525 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:14:13.176058 1684525 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.178622 1684525 out.go:177] * dry-run validation complete!
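	The ExtraOptions entry in the profile above ({Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}) appears to have been persisted by the earlier serial/ExtraConfig run against this profile. As a sketch, assuming minikube's usual --extra-config syntax (the flag is not part of the invocation shown here), an entry like that would originate from a start flag of the form:
	
	  # hypothetical re-run illustrating how the persisted ExtraOptions entry maps to the CLI
	  out/minikube-linux-amd64 start -p functional-699837 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision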
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
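	Every dockerd entry in this window is a task teardown, and the container IDs (19b815a4b1b2..., 0e5a036fd865..., c9537e09fe59...) reappear below as the exited control-plane containers, so this is the crash loop as seen from the runtime. The same stream can be followed on the node, assuming the kicbase image's systemd units are named docker and cri-docker:
	
	  docker exec functional-699837 sudo journalctl -u docker -u cri-docker --no-pager --lines=20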
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       About a minute ago   Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
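	Attempt counts of 4 and 5 on the Exited etcd, kube-apiserver, and kube-controller-manager rows indicate repeated CrashLoopBackOff restarts, while kube-scheduler (attempt 0) is the only control-plane container still running. A quick way to reproduce this table from the host, relying on the node container running its own dockerd:
	
	  # list recent containers inside the node, newest first
	  docker exec functional-699837 docker ps -a --format '{{.ID}}\t{{.Status}}\t{{.Names}}' | head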
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:14.473805   26713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:14.474455   26713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:14.476138   26713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:14.476609   26713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:14.478034   26713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
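	kubectl here runs inside the node and targets localhost:8441, matching the APIServerPort:8441 in the profile config above, so the connection-refused errors point at the apiserver container being down rather than at a kubeconfig problem. A narrower probe, reusing only paths already shown in this log:
	
	  # 'get --raw' queries the health endpoint without listing any cluster objects
	  docker exec functional-699837 sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/healthz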
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
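	This usage text is the entire etcd log: the binary exits immediately because it rejects -proxy-refresh-interval as an unknown flag. That flag configured etcd's v2 proxy, which recent etcd releases have removed, so an etcd build without the v2 proxy refuses a manifest that still passes it; every later apiserver failure to reach 127.0.0.1:2379 follows from this. Assuming the standard kubeadm manifest location, the offending argument should be visible with:
	
	  # show where the rejected flag is injected into the etcd static pod
	  docker exec functional-699837 sudo grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml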
	
	
	
	==> kernel <==
	 09:14:14 up 1 day, 17:55,  0 users,  load average: 0.42, 0.17, 0.25
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
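	The node container reports Ubuntu 22.04 on the host's 5.15 GCP kernel. The kubelet event further down warns that cgroup v1 support is in maintenance mode, consistent with the cgroupfs CgroupDriver in the docker info above; a one-line check, assuming coreutils' stat inside the node:
	
	  # prints cgroup2fs on a cgroup v2 host, tmpfs on cgroup v1
	  docker exec functional-699837 stat -fc %T /sys/fs/cgroup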
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
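	The controller manager itself comes up cleanly, serves on 127.0.0.1:10257, and only exits after its bounded wait for the apiserver's /healthz times out, so this failure is downstream of the etcd crash rather than an independent fault. While the process is up, its own health endpoint can be probed directly, assuming curl is present in the node image:
	
	  # -k because the serving cert is the self-signed one generated at startup
	  docker exec functional-699837 curl -sk https://127.0.0.1:10257/healthz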
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:14:07.280269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:14:08.128547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:14:10.109602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
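	Unlike the controller manager, the scheduler survives the outage: its informers log "Failed to watch" and retry indefinitely, which is why it is the one control-plane container still running in the status table. Its recent output can be tailed via the inner dockerd, using the container ID from that table:
	
	  docker exec functional-699837 docker logs --tail 20 ab71ff54628ca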
	
	
	==> kubelet <==
	Aug 04 09:13:58 functional-699837 kubelet[23032]: E0804 09:13:58.609095   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:59 functional-699837 kubelet[23032]: E0804 09:13:59.432444   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:14:02 functional-699837 kubelet[23032]: E0804 09:14:02.693365   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: I0804 09:14:04.635636   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: E0804 09:14:04.636090   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.350524   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.610218   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644074   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: I0804 09:14:08.644186   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644380   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643561   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: I0804 09:14:10.643671   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643844   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.218396   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: I0804 09:14:11.637647   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.638029   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.997440   23032 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.610748   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.694152   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: E0804 09:14:14.644071   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: I0804 09:14:14.644181   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: E0804 09:14:14.644371   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
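	The kubelet shows the same cascade from its side: it cannot register the node, post events, or ensure its lease (retried every 7s), and it holds the control-plane pods in CrashLoopBackOff with back-offs of 1m20s for the apiserver and controller manager and 2m40s for etcd. Once the apiserver answers again, the lease it keeps failing to ensure can be inspected with:
	
	  # the exact lease object from the 'Failed to ensure lease exists' URL above
	  docker exec functional-699837 sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig -n kube-node-lease get lease functional-699837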
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (300.162578ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DashboardCmd (1.78s)

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd (2.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 status
functional_test.go:871: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 status: exit status 2 (296.942021ms)

-- stdout --
	functional-699837
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:873: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-699837 status" : exit status 2
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:877: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (318.450142ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:879: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-699837 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 status -o json
functional_test.go:889: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 status -o json: exit status 2 (312.520987ms)

-- stdout --
	{"Name":"functional-699837","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:891: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-699837 status -o json" : exit status 2
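All three invocations agree: host and kubelet Running, apiserver Stopped, with the degraded state signalled through exit code 2 rather than stderr. For scripting, the JSON form is the easiest to consume; a sketch assuming jq is installed:

  # exit code 2 does not stop the pipe; '|| true' keeps 'set -e' scripts alive
  out/minikube-linux-amd64 -p functional-699837 status -o json | jq -r .APIServer || true
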
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
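For triage it is usually quicker to pull single fields out of that inspect dump with a Go template than to scan the full JSON. A minimal sketch against the same container (names and ports taken from the output above):

	docker inspect -f '{{.State.Status}}' functional-699837
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-699837

The second command prints 32786 here, the localhost port that forwards to the apiserver port 8441 inside the kic container.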
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (299.119962ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
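Note on the exit code: minikube status bit-encodes component health into its exit status, so a non-zero exit with Host reporting Running typically means a later layer (kubelet, apiserver) is not healthy yet, which is why the harness marks it "(may be ok)". To see the components in one call (template field names as used by minikube's status command, asserted from its documented usage rather than from this run):

	out/minikube-linux-amd64 status -p functional-699837 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'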
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ license │                                                                                                                                                                 │ minikube          │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config unset cpus                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config get cpus                                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ service │ functional-699837 service list                                                                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ config  │ functional-699837 config set cpus 2                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config get cpus                                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config unset cpus                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh -n functional-699837 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config get cpus                                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ service │ functional-699837 service list -o json                                                                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ cp      │ functional-699837 cp functional-699837:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelCpCmd4180608053/001/cp-test.txt │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount   │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001:/mount-9p --alsologtostderr -v=1            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ service │ functional-699837 service --namespace=default --https --url hello-node                                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh -n functional-699837 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ service │ functional-699837 service hello-node --url --format={{.IP}}                                                                                                     │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ cp      │ functional-699837 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ service │ functional-699837 service hello-node --url                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh -n functional-699837 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh sudo systemctl is-active crio                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh -- ls -la /mount-9p                                                                                                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh cat /mount-9p/test-1754298848740038253                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:01:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:01:42.156481 1661480 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:01:42.156707 1661480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:01:42.156710 1661480 out.go:358] Setting ErrFile to fd 2...
	I0804 09:01:42.156714 1661480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:01:42.156897 1661480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:01:42.157507 1661480 out.go:352] Setting JSON to false
	I0804 09:01:42.158437 1661480 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150191,"bootTime":1754147911,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:01:42.158562 1661480 start.go:140] virtualization: kvm guest
	I0804 09:01:42.160356 1661480 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:01:42.161427 1661480 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:01:42.161472 1661480 notify.go:220] Checking for updates...
	I0804 09:01:42.163278 1661480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:01:42.164206 1661480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:01:42.165120 1661480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:01:42.165996 1661480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:01:42.166919 1661480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:01:42.168183 1661480 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:01:42.168274 1661480 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:01:42.191254 1661480 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:01:42.191357 1661480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:01:42.241393 1661480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2025-08-04 09:01:42.232515248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:01:42.241500 1661480 docker.go:318] overlay module found
	I0804 09:01:42.242889 1661480 out.go:177] * Using the docker driver based on existing profile
	I0804 09:01:42.244074 1661480 start.go:304] selected driver: docker
	I0804 09:01:42.244080 1661480 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:42.244146 1661480 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:01:42.244220 1661480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:01:42.294650 1661480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2025-08-04 09:01:42.286637693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:01:42.295228 1661480 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 09:01:42.295248 1661480 cni.go:84] Creating CNI manager for ""
	I0804 09:01:42.295307 1661480 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:01:42.295353 1661480 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:42.296893 1661480 out.go:177] * Starting "functional-699837" primary control-plane node in "functional-699837" cluster
	I0804 09:01:42.297909 1661480 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:01:42.298895 1661480 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:01:42.299795 1661480 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:01:42.299827 1661480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 09:01:42.299834 1661480 cache.go:56] Caching tarball of preloaded images
	I0804 09:01:42.299892 1661480 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:01:42.299912 1661480 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 09:01:42.299918 1661480 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 09:01:42.300000 1661480 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/config.json ...
	I0804 09:01:42.318895 1661480 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:01:42.318906 1661480 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:01:42.318921 1661480 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:01:42.318949 1661480 start.go:360] acquireMachinesLock for functional-699837: {Name:mkeddb8e244284f14cfc07327f464823de65cf67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:01:42.319013 1661480 start.go:364] duration metric: took 47.797µs to acquireMachinesLock for "functional-699837"
	I0804 09:01:42.319031 1661480 start.go:96] Skipping create...Using existing machine configuration
	I0804 09:01:42.319035 1661480 fix.go:54] fixHost starting: 
	I0804 09:01:42.319241 1661480 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
	I0804 09:01:42.335260 1661480 fix.go:112] recreateIfNeeded on functional-699837: state=Running err=<nil>
	W0804 09:01:42.335277 1661480 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 09:01:42.336775 1661480 out.go:177] * Updating the running docker "functional-699837" container ...
	I0804 09:01:42.337763 1661480 machine.go:93] provisionDockerMachine start ...
	I0804 09:01:42.337866 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.354303 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.354606 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.354616 1661480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:01:42.480475 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 09:01:42.480497 1661480 ubuntu.go:169] provisioning hostname "functional-699837"
	I0804 09:01:42.480554 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.497934 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.498143 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.498149 1661480 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-699837 && echo "functional-699837" | sudo tee /etc/hostname
	I0804 09:01:42.631472 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-699837
	
	I0804 09:01:42.631543 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:42.651771 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:42.651968 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:42.651979 1661480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-699837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-699837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-699837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:01:42.773172 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
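The hosts-file script above is idempotent: it rewrites the 127.0.1.1 entry only when no line for functional-699837 exists yet, so reruns leave /etc/hosts untouched (the empty SSH output here is that no-op path). A spot check on a live node might look like this (command shape assumed from minikube's ssh subcommand, not part of this run):

	out/minikube-linux-amd64 -p functional-699837 ssh -- grep functional-699837 /etc/hosts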
	I0804 09:01:42.773193 1661480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:01:42.773212 1661480 ubuntu.go:177] setting up certificates
	I0804 09:01:42.773223 1661480 provision.go:84] configureAuth start
	I0804 09:01:42.773312 1661480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 09:01:42.791415 1661480 provision.go:143] copyHostCerts
	I0804 09:01:42.791465 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:01:42.791472 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:01:42.791531 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:01:42.791616 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:01:42.791620 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:01:42.791646 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:01:42.791714 1661480 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:01:42.791716 1661480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:01:42.791734 1661480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:01:42.791789 1661480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.functional-699837 san=[127.0.0.1 192.168.49.2 functional-699837 localhost minikube]
	I0804 09:01:43.143340 1661480 provision.go:177] copyRemoteCerts
	I0804 09:01:43.143389 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:01:43.143445 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.161220 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:43.249861 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:01:43.271347 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 09:01:43.292377 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:01:43.313416 1661480 provision.go:87] duration metric: took 540.180755ms to configureAuth
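configureAuth just generated a server certificate whose SANs (the san=[...] list logged above) must cover every address clients will dial, including 192.168.49.2 and 127.0.0.1. Assuming openssl is available on the runner, the issued SANs can be confirmed with:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'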
	I0804 09:01:43.313435 1661480 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:01:43.313593 1661480 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:01:43.313633 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.330273 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.330483 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.330489 1661480 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:01:43.457453 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:01:43.457467 1661480 ubuntu.go:71] root file system type: overlay
	I0804 09:01:43.457576 1661480 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:01:43.457634 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.474934 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.475149 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.475211 1661480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:01:43.609712 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:01:43.609798 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.627690 1661480 main.go:141] libmachine: Using SSH client type: native
	I0804 09:01:43.627960 1661480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0804 09:01:43.627979 1661480 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 09:01:43.753925 1661480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:01:43.753943 1661480 machine.go:96] duration metric: took 1.416170869s to provisionDockerMachine
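The diff || { mv; daemon-reload; restart; } one-liner above only swaps in docker.service.new and restarts dockerd when the rendered unit differs from the installed one; the empty SSH output suggests no change was needed, consistent with provisionDockerMachine finishing in about 1.4s. To inspect the unit systemd is actually running, including the cleared-then-set ExecStart pair, one could use:

	out/minikube-linux-amd64 -p functional-699837 ssh -- sudo systemctl cat docker.service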
	I0804 09:01:43.753958 1661480 start.go:293] postStartSetup for "functional-699837" (driver="docker")
	I0804 09:01:43.753972 1661480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:01:43.754026 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:01:43.754070 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.771133 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:43.861861 1661480 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:01:43.864855 1661480 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:01:43.864888 1661480 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:01:43.864895 1661480 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:01:43.864901 1661480 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:01:43.864911 1661480 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:01:43.864956 1661480 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:01:43.865026 1661480 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:01:43.865096 1661480 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts -> hosts in /etc/test/nested/copy/1582690
	I0804 09:01:43.865126 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1582690
	I0804 09:01:43.872832 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:01:43.894143 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts --> /etc/test/nested/copy/1582690/hosts (40 bytes)
	I0804 09:01:43.915287 1661480 start.go:296] duration metric: took 161.311477ms for postStartSetup
	I0804 09:01:43.915357 1661480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:01:43.915392 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:43.932959 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.018261 1661480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:01:44.022893 1661480 fix.go:56] duration metric: took 1.703852119s for fixHost
	I0804 09:01:44.022909 1661480 start.go:83] releasing machines lock for "functional-699837", held for 1.703889075s
	I0804 09:01:44.022981 1661480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-699837
	I0804 09:01:44.039826 1661480 ssh_runner.go:195] Run: cat /version.json
	I0804 09:01:44.039861 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:44.039893 1661480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:01:44.039958 1661480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
	I0804 09:01:44.056968 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.057018 1661480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
	I0804 09:01:44.215860 1661480 ssh_runner.go:195] Run: systemctl --version
	I0804 09:01:44.220163 1661480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:01:44.224284 1661480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:01:44.241133 1661480 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:01:44.241191 1661480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 09:01:44.249056 1661480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
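The two find/sed passes above (1) give the loopback CNI config an explicit name and pin its cniVersion to 1.0.0, and (2) would sideline any bridge/podman configs, of which none were present. Applying those sed rules, the patched loopback file ends up shaped roughly like this (a reconstruction from the expressions, not a capture from the node):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}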
	I0804 09:01:44.249074 1661480 start.go:495] detecting cgroup driver to use...
	I0804 09:01:44.249111 1661480 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:01:44.249262 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:01:44.263581 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:44.682033 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:01:44.691892 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:01:44.700781 1661480 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:01:44.700830 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:01:44.709728 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:01:44.718687 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:01:44.727121 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:01:44.735358 1661480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:01:44.743204 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:01:44.751683 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:01:44.760146 1661480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 09:01:44.768590 1661480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:01:44.775769 1661480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:01:44.782939 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:44.861305 1661480 ssh_runner.go:195] Run: sudo systemctl restart containerd
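Taken together, the sed passes above rewrite /etc/containerd/config.toml in place before this restart. A reconstructed fragment of what they leave behind (nesting simplified; the SystemdCgroup substitution is global and lands wherever that key occurs, e.g. under the runc runtime options):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	# elsewhere in the file:
	#   SystemdCgroup = false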
	I0804 09:01:45.079189 1661480 start.go:495] detecting cgroup driver to use...
	I0804 09:01:45.079234 1661480 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:01:45.079293 1661480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:01:45.091099 1661480 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:01:45.091152 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:01:45.102759 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:01:45.118200 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:45.531236 1661480 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:01:45.535092 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:01:45.543037 1661480 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 09:01:45.558759 1661480 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:01:45.636615 1661480 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:01:45.710742 1661480 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:01:45.710843 1661480 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 09:01:45.726627 1661480 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:01:45.735943 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:45.815264 1661480 ssh_runner.go:195] Run: sudo systemctl restart docker
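The 130-byte /etc/docker/daemon.json scp'd in just above is what moves dockerd itself onto the cgroupfs driver. Its exact contents are not logged, only the size; in minikube's docker provisioner it is typically of this shape (a hedged reconstruction):

	{
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": {"max-size": "100m"},
	    "storage-driver": "overlay2"
	}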
	I0804 09:01:46.120565 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:01:46.133038 1661480 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 09:01:46.150796 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:01:46.160527 1661480 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:01:46.221390 1661480 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:01:46.295075 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:46.370922 1661480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:01:46.383433 1661480 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:01:46.393933 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:46.488903 1661480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:01:46.549986 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:01:46.560540 1661480 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:01:46.560600 1661480 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:01:46.563751 1661480 start.go:563] Will wait 60s for crictl version
	I0804 09:01:46.563795 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:01:46.566758 1661480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:01:46.597980 1661480 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
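With /etc/crictl.yaml now pointing at cri-dockerd, the same query can also be aimed at the socket explicitly, which helps when the yaml is in doubt (standard crictl flag, equivalent to the sudo /usr/bin/crictl version call above):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version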
	I0804 09:01:46.598027 1661480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:01:46.620697 1661480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:01:46.645762 1661480 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 09:01:46.645842 1661480 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:01:46.662809 1661480 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0804 09:01:46.668020 1661480 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0804 09:01:46.668935 1661480 kubeadm.go:875] updating cluster {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:01:46.669097 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.081840 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.467578 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:47.872001 1661480 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:01:47.872135 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:48.275938 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:48.676410 1661480 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:01:49.085653 1661480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:01:49.106101 1661480 docker.go:703] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-699837
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0804 09:01:49.106124 1661480 docker.go:633] Images already preloaded, skipping extraction
	I0804 09:01:49.106192 1661480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:01:49.124259 1661480 docker.go:703] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-699837
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0804 09:01:49.124275 1661480 cache_images.go:85] Images are preloaded, skipping loading
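(Because every image in the preload manifest already shows up in the docker images output above, the preload tarball extraction is skipped. A shell equivalent of that presence check, sketched over a few of the tags listed above, is:)

    for img in registry.k8s.io/kube-apiserver:v1.34.0-beta.0 \
               registry.k8s.io/etcd:3.6.1-1 \
               registry.k8s.io/coredns/coredns:v1.12.1; do
      # docker image inspect exits non-zero when the image is absent
      docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
    done
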
	I0804 09:01:49.124286 1661480 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0-beta.0 docker true true} ...
	I0804 09:01:49.124427 1661480 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-699837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 09:01:49.124491 1661480 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:01:49.170617 1661480 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0804 09:01:49.170646 1661480 cni.go:84] Creating CNI manager for ""
	I0804 09:01:49.170660 1661480 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:01:49.170668 1661480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 09:01:49.170688 1661480 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-699837 NodeName:functional-699837 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:01:49.170805 1661480 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-699837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 09:01:49.170853 1661480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:01:49.178893 1661480 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 09:01:49.178936 1661480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:01:49.186387 1661480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 09:01:49.201786 1661480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 09:01:49.217510 1661480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
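(The generated kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new, 2152 bytes, before anything is restarted. To sanity-check such a file by hand, recent kubeadm releases ship a validate subcommand; a sketch against the staged copy, assuming the bundled kubeadm supports it:)

    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
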
	I0804 09:01:49.233089 1661480 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:01:49.236403 1661480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:01:49.323526 1661480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:01:49.333766 1661480 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837 for IP: 192.168.49.2
	I0804 09:01:49.333778 1661480 certs.go:194] generating shared ca certs ...
	I0804 09:01:49.333793 1661480 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:01:49.333937 1661480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:01:49.333980 1661480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:01:49.333986 1661480 certs.go:256] generating profile certs ...
	I0804 09:01:49.334070 1661480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.key
	I0804 09:01:49.334108 1661480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key.5971bdc2
	I0804 09:01:49.334140 1661480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key
	I0804 09:01:49.334230 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:01:49.334251 1661480 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:01:49.334257 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:01:49.334275 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:01:49.334296 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:01:49.334317 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:01:49.334351 1661480 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:01:49.334909 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:01:49.355952 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:01:49.376603 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:01:49.397019 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:01:49.417530 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 09:01:49.437950 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 09:01:49.457994 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:01:49.478390 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 09:01:49.498988 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:01:49.519691 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:01:49.540289 1661480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:01:49.560954 1661480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 09:01:49.576254 1661480 ssh_runner.go:195] Run: openssl version
	I0804 09:01:49.581261 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:01:49.589514 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.592478 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.592512 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:01:49.598570 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 09:01:49.606091 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:01:49.613958 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.616884 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.616913 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:01:49.622974 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 09:01:49.630466 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:01:49.638717 1661480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.641763 1661480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.641800 1661480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:01:49.648809 1661480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
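(The test -L || ln -fs commands above maintain OpenSSL's hashed-name convention: each CA in /etc/ssl/certs must also be reachable as <subject-hash>.0 for chain lookup. The hash, e.g. b5213941 for minikubeCA.pem in this run, comes from openssl x509 -hash; done by hand:)

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
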
	I0804 09:01:49.656437 1661480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:01:49.659644 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 09:01:49.665529 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 09:01:49.671334 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 09:01:49.677030 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 09:01:49.682628 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 09:01:49.688419 1661480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
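(Each of the six openssl -checkend 86400 runs above asks whether a control-plane certificate stays valid for at least another 86400 s, i.e. 24 h; the command exits 0 if so and 1 if the cert would expire inside that window, which is what lets minikube reuse the existing certs instead of regenerating them. Standalone:)

    # exit 0 if still valid 24h from now, 1 otherwise
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for >= 24h" || echo "expires within 24h"
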
	I0804 09:01:49.694068 1661480 kubeadm.go:392] StartCluster: {Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:01:49.694169 1661480 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:01:49.711391 1661480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:01:49.719062 1661480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 09:01:49.719070 1661480 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 09:01:49.719111 1661480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 09:01:49.726688 1661480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:49.727133 1661480 kubeconfig.go:125] found "functional-699837" server: "https://192.168.49.2:8441"
	I0804 09:01:49.728393 1661480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 09:01:49.735849 1661480 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-08-04 08:47:09.659345836 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-08-04 09:01:49.228640689 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
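(Drift detection here is just diff -u between the kubeadm.yaml in use and the freshly rendered .new file: diff exits non-zero on any difference, the admission-plugins value in this run, which triggers the container stop and the kubeadm init phase re-runs below. The same check in isolation:)

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "config drift: cluster must be reconfigured from kubeadm.yaml.new"
    fi
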
	I0804 09:01:49.735860 1661480 kubeadm.go:1152] stopping kube-system containers ...
	I0804 09:01:49.735896 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:01:49.755611 1661480 docker.go:496] Stopping containers: [54bef897d3ad 5e988e8b274a 16527e0d8c26 14c7dc479dba 243f1d3d8950 2fafac7520c8 a70a68ec6169 340fbe431c80 3206d43d6e58 6196286ba923 87c98d51b11a 4dc39892c792 a670d9d90ef4 0cb03d71b984 cdae8372eae9]
	I0804 09:01:49.755668 1661480 ssh_runner.go:195] Run: docker stop 54bef897d3ad 5e988e8b274a 16527e0d8c26 14c7dc479dba 243f1d3d8950 2fafac7520c8 a70a68ec6169 340fbe431c80 3206d43d6e58 6196286ba923 87c98d51b11a 4dc39892c792 a670d9d90ef4 0cb03d71b984 cdae8372eae9
	I0804 09:01:49.833087 1661480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 09:01:49.988574 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:01:49.996961 1661480 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Aug  4 08:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Aug  4 08:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Aug  4 08:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Aug  4 08:51 /etc/kubernetes/scheduler.conf
	
	I0804 09:01:49.996998 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:01:50.004698 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:01:50.012067 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.012114 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:01:50.019467 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:01:50.027050 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.027082 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:01:50.034408 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:01:50.041768 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:01:50.041795 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:01:50.049038 1661480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:01:50.056613 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:50.095874 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.185164 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.089256416s)
	I0804 09:01:52.185190 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.321482 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.369615 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:01:52.486402 1661480 api_server.go:52] waiting for apiserver process to appear ...
	I0804 09:01:52.486480 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:52.986660 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:53.487520 1661480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:01:53.499325 1661480 api_server.go:72] duration metric: took 1.012937004s to wait for apiserver process to appear ...
	I0804 09:01:53.499341 1661480 api_server.go:88] waiting for apiserver healthz status ...
	I0804 09:01:53.499366 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:01:58.500087 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:01:58.500130 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:03.500427 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:03.500461 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:08.502025 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:08.502061 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:13.503279 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:13.503317 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:14.779567 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": read tcp 192.168.49.1:33220->192.168.49.2:8441: read: connection reset by peer
	I0804 09:02:14.779627 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:14.780024 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.000448 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:15.000951 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.499579 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:15.499998 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:15.999661 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:21.000340 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:21.000373 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:26.001332 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:26.001368 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:31.002000 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:31.002033 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.005328 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:02:36.005357 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.551344 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": read tcp 192.168.49.1:35998->192.168.49.2:8441: read: connection reset by peer
	I0804 09:02:36.551397 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.551841 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:36.999411 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:36.999848 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:37.500408 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:37.500946 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:37.999558 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:37.999957 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:38.499584 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:38.500029 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:38.999644 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:39.000099 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:39.499738 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:39.500213 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:39.999937 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:40.000357 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:40.500064 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:40.500521 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:40.999940 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:41.000330 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:41.500057 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:41.500511 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:42.000224 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:42.000633 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:42.500342 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:42.500765 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.000455 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.000936 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.499548 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.499961 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:43.999579 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:43.999966 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:44.499598 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:44.500010 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:44.999630 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:45.000087 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:45.499708 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:45.500143 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:45.999756 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:46.000186 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:46.499807 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:46.500248 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:46.999865 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:47.000330 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:47.500068 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:47.500472 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:48.000163 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:48.000618 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:48.500337 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:48.500730 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.000434 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.000869 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.499503 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.499937 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:49.999501 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:49.999940 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:50.499602 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:50.500057 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:50.999688 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:51.000139 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:51.499774 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:51.500227 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:51.999865 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:52.000295 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:52.500025 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:52.500526 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:53.000242 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:53.000634 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
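(The loop above probes /healthz roughly every 500 ms with about a 5 s per-request timeout; "connection refused" means nothing is listening on 8441 yet, i.e. the apiserver container is still starting or crash-looping, while the earlier "context deadline exceeded" lines mean the port accepted the connection but never answered. A one-shot version of the probe:)

    # -k skips TLS verification (minikube's CA isn't in the host trust store)
    curl -sk --max-time 5 https://192.168.49.2:8441/healthz; echo
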
	I0804 09:02:53.500441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:02:53.519729 1661480 logs.go:282] 2 containers: [535dc83f2f73 a70a68ec6169]
	I0804 09:02:53.519801 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:02:53.538762 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:02:53.538813 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:02:53.556054 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.556070 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:02:53.556116 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:02:53.573504 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:02:53.573556 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:02:53.590727 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.590742 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:02:53.590784 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:02:53.608494 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:02:53.608550 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:02:53.625413 1661480 logs.go:282] 0 containers: []
	W0804 09:02:53.625424 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:02:53.625435 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:02:53.625443 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:02:53.665235 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:02:53.665279 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:02:53.683621 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:02:53.683636 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:02:53.708748 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:02:53.708766 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:02:53.729347 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:02:53.729362 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:02:53.770407 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:02:53.770428 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:02:53.852664 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:02:53.852687 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:02:53.907229 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:02:53.900372   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.900835   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902406   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902856   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.904351   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:02:53.900372   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.900835   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902406   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.902856   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:53.904351   13057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:02:53.907253 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:02:53.907266 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:02:53.932272 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:02:53.932289 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:02:53.966223 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:02:53.966245 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:02:54.018841 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:02:54.018859 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:02:56.541137 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:02:56.541605 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:02:56.541686 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:02:56.560651 1661480 logs.go:282] 2 containers: [535dc83f2f73 a70a68ec6169]
	I0804 09:02:56.560710 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:02:56.578753 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:02:56.578815 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:02:56.596005 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.596019 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:02:56.596059 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:02:56.613187 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:02:56.613269 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:02:56.629991 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.630005 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:02:56.630051 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:02:56.647935 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:02:56.648000 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:02:56.665663 1661480 logs.go:282] 0 containers: []
	W0804 09:02:56.665677 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:02:56.665686 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:02:56.665696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:02:56.703183 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:02:56.703200 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:02:56.757823 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:02:56.750851   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.751407   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.752950   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.753405   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.754929   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:02:56.750851   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.751407   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.752950   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.753405   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:02:56.754929   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:02:56.757834 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:02:56.757846 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:02:56.793009 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:02:56.793031 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:02:56.814543 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:02:56.814560 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:02:56.858353 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:02:56.858374 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:02:56.938490 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:02:56.938512 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:02:56.957429 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:02:56.957445 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:02:56.982565 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:02:56.982582 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:02:57.007749 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:02:57.007767 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:02:57.027909 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:02:57.027926 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:02:59.582075 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:04.583858 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
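
The pair of lines above is minikube's apiserver readiness probe: between diagnostic passes it issues a GET against https://192.168.49.2:8441/healthz and, when the request fails, falls back to the log-gathering sequence that follows. A minimal sketch of such a probe in Go, assuming a self-signed apiserver certificate (an illustration, not minikube's actual api_server.go):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz GETs the apiserver health endpoint and treats anything
// other than 200 OK as "not ready". Connection errors surface the same
// "connect: connection refused" text seen in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the apiserver serves a self-signed certificate
		// for the node IP, so a bare probe must skip verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.49.2:8441/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
```
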
	I0804 09:03:04.583974 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:04.603429 1661480 logs.go:282] 3 containers: [a20e277f239a 535dc83f2f73 a70a68ec6169]
	I0804 09:03:04.603486 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:04.621192 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:04.621271 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:04.638764 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.638780 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:04.638831 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:04.656957 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:04.657045 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:04.673865 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.673881 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:04.673937 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:04.691557 1661480 logs.go:282] 1 containers: [0bd5610c8547]
	I0804 09:03:04.691645 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:04.709384 1661480 logs.go:282] 0 containers: []
	W0804 09:03:04.709397 1661480 logs.go:284] No container was found matching "kindnet"
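
Each diagnostic pass begins by locating the control-plane containers, one `docker ps -a` per component, filtered on the `k8s_<component>` name prefix that kubelet gives containers under the Docker runtime. A sketch of that discovery step (the helper below is hypothetical, shown only to make the filter pattern concrete):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the k8s_<component> prefix, mirroring the docker ps calls
// in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
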
	I0804 09:03:04.709412 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:04.709425 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:04.728509 1661480 logs.go:123] Gathering logs for kube-apiserver [535dc83f2f73] ...
	I0804 09:03:04.728525 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 535dc83f2f73"
	I0804 09:03:04.753446 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:04.753464 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:04.772841 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:04.772865 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:03:19.398944 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.626059536s)
	W0804 09:03:19.398974 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:14.821564   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:03:19.391583   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:42134->[::1]:8441: read: connection reset by peer"
	E0804 09:03:19.392195   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.393996   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:19.394458   13461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
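
The "describe nodes" gather step shells out to the version-pinned kubectl binary inside the node, pointed at the node-local kubeconfig, which is why it fails with the same connection-refused errors for as long as the apiserver is down. A sketch under that assumption (paths copied from the log; the wrapper itself is illustrative only):

```go
package main

import (
	"fmt"
	"os/exec"
)

// describeNodes runs the node's own kubectl against the node-local
// kubeconfig; a non-zero exit (the "Process exited with status 1"
// above) comes back as err alongside the combined output.
func describeNodes() (string, error) {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describeNodes()
	if err != nil {
		fmt.Println("failed describe nodes:", err)
	}
	fmt.Print(out)
}
```
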
	I0804 09:03:19.398986 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:19.398996 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:19.427211 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:19.427230 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:19.452181 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:19.452199 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:19.488740 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:19.488758 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:19.543335 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:19.543361 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:19.564213 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:19.564229 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:19.604899 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:19.604921 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:19.642424 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:19.642448 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
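
A complete gather pass maps each log source to one shell command: journalctl for kubelet and Docker, dmesg for kernel warnings, `docker logs --tail 400` per container, and crictl (falling back to docker ps) for container status. A simplified sketch assuming direct shell access on the node (minikube runs the same commands through its ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, as the ssh_runner
// lines above do, and reports how much output it captured.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("  failed: %v\n", err)
		return
	}
	fmt.Printf("  collected %d bytes\n", len(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("kube-apiserver [a20e277f239a]", "docker logs --tail 400 a20e277f239a")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```
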
	I0804 09:03:22.221477 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:22.222040 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:22.222143 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:22.241050 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:22.241115 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:22.258165 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:22.258242 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:22.276561 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.276574 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:22.276617 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:22.295029 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:22.295092 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:22.312122 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.312132 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:22.312182 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:22.329412 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:22.329488 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:22.346310 1661480 logs.go:282] 0 containers: []
	W0804 09:03:22.346323 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:22.346333 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:22.346343 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:22.367806 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:22.367821 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:22.445841 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:22.445861 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:22.471474 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:22.471489 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:22.496759 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:22.496775 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:22.517309 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:22.517327 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:22.557714 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:22.557732 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:22.593146 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:22.593170 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:22.611504 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:22.611518 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:22.665839 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:22.658662   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.659228   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.660791   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.661206   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:22.662674   13782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:22.665851 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:22.665861 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:22.702988 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:22.703006 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:22.755945 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:22.755968 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:25.277601 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:25.278136 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:25.278248 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:25.297160 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:25.297216 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:25.316643 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:25.316709 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:25.334387 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.334404 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:25.334454 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:25.351774 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:25.351842 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:25.369473 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.369485 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:25.369530 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:25.387080 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:25.387143 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:25.404296 1661480 logs.go:282] 0 containers: []
	W0804 09:03:25.404309 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:25.404318 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:25.404329 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:25.422982 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:25.422997 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:25.476224 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:25.468440   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.468969   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.470557   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.471704   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:25.472278   13910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:25.476235 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:25.476245 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:25.501952 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:25.501972 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:25.522116 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:25.522135 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:25.559523 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:25.559539 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:25.611041 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:25.611060 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:25.631550 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:25.631569 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:25.652151 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:25.652168 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:25.726816 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:25.726837 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:25.752766 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:25.752786 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:25.796279 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:25.796296 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:28.337315 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:28.337785 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:28.337864 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:28.356559 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:28.356610 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:28.374336 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:28.374386 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:28.391793 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.391806 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:28.391847 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:28.410341 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:28.410399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:28.427793 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.427809 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:28.427859 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:28.444847 1661480 logs.go:282] 2 containers: [ef4985b5f2b9 0bd5610c8547]
	I0804 09:03:28.444924 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:28.462592 1661480 logs.go:282] 0 containers: []
	W0804 09:03:28.462609 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:28.462619 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:28.462631 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:28.482600 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:28.482615 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:28.507602 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:28.507619 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:28.526984 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:28.526998 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:28.577894 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:28.577914 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:28.597919 1661480 logs.go:123] Gathering logs for kube-controller-manager [0bd5610c8547] ...
	I0804 09:03:28.597936 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd5610c8547"
	I0804 09:03:28.617782 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:28.617797 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:28.660530 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:28.660549 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:28.698114 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:28.698131 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:28.771090 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:28.771114 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:28.825345 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:28.818550   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.819081   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.820612   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.821003   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:28.822518   14179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:28.825358 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:28.825372 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:28.851539 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:28.851559 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:31.390425 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:31.390852 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:31.390931 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:31.410612 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:31.410681 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:31.428091 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:31.428165 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:31.446602 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.446621 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:31.446675 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:31.464168 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:31.464223 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:31.481049 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.481063 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:31.481115 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:31.497227 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:31.497311 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:31.513575 1661480 logs.go:282] 0 containers: []
	W0804 09:03:31.513586 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:31.513594 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:31.513604 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:31.567139 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:31.558828   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.559407   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.561385   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.562296   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:31.563788   14309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:31.567151 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:31.567162 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:31.591977 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:31.591994 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:31.644763 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:31.644783 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:31.664981 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:31.664997 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:31.708596 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:31.708616 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:31.734001 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:31.734019 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:31.753980 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:31.754000 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:31.789591 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:31.789609 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:31.825063 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:31.825082 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:31.904005 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:31.904027 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:34.424932 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:34.425333 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:34.425419 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:34.444542 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:34.444596 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:34.461912 1661480 logs.go:282] 1 containers: [6986b6d5499e]
	I0804 09:03:34.461985 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:34.479889 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.479903 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:34.479953 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:34.497552 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:34.497604 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:34.515003 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.515014 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:34.515053 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:34.532842 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:34.532909 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:34.549350 1661480 logs.go:282] 0 containers: []
	W0804 09:03:34.549362 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:34.549371 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:34.549381 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:34.567689 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:34.567704 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:34.605688 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:34.605703 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:34.625847 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:34.625861 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:34.668000 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:34.668021 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:34.742105 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:34.742129 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:34.797022 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:34.790082   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.790655   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792223   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.792752   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:34.794335   14522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:34.797034 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:34.797047 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:34.822397 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:34.822417 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:34.849317 1661480 logs.go:123] Gathering logs for etcd [6986b6d5499e] ...
	I0804 09:03:34.849334 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6986b6d5499e"
	I0804 09:03:34.869225 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:34.869259 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:34.923527 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:34.923548 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:37.459936 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:37.460377 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:37.460466 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:37.479380 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:37.479441 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:37.497080 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:37.497149 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:37.514761 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.514778 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:37.514824 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:37.532588 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:37.532656 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:37.550208 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.550224 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:37.550275 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:37.568463 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:37.568527 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:37.585787 1661480 logs.go:282] 0 containers: []
	W0804 09:03:37.585800 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:37.585809 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:37.585821 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:37.659045 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:37.659073 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:37.685717 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:37.685735 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:37.704291 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:37.704307 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:37.741922 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:37.741943 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:37.793694 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:37.793713 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:37.813368 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:37.813385 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:37.848883 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:37.848900 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:37.867491 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:37.867505 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:37.921199 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:37.913356   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.913927   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916144   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.916563   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:37.918058   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:37.921219 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:37.921231 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:37.947342 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:37.947359 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:40.489125 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:40.489554 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:40.489630 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:40.508607 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:40.508669 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:40.528138 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:40.528187 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:40.545305 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.545318 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:40.545357 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:40.562122 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:40.562191 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:40.579129 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.579144 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:40.579191 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:40.597048 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:40.597124 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:40.614353 1661480 logs.go:282] 0 containers: []
	W0804 09:03:40.614368 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:40.614378 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:40.614390 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:40.634206 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:40.634222 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:40.653989 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:40.654006 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:40.672246 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:40.672260 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:40.726229 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:40.719031   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.719524   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721096   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.721545   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:40.723074   14953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:40.726242 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:40.726257 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:40.766179 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:40.766200 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:40.821048 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:40.821069 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:40.864128 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:40.864147 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:40.900068 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:40.900085 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:40.973288 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:40.973310 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:41.000020 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:41.000039 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:43.525994 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:43.526421 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:43.526503 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:43.545290 1661480 logs.go:282] 2 containers: [a20e277f239a a70a68ec6169]
	I0804 09:03:43.545349 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:43.562985 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:43.563038 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:43.579516 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.579532 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:43.579582 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:43.597186 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:43.597261 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:43.613554 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.613568 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:43.613609 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:43.631061 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:43.631120 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:43.649100 1661480 logs.go:282] 0 containers: []
	W0804 09:03:43.649114 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:43.649125 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:43.649144 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:43.667561 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:43.667577 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:03:43.721973 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:43.714008   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.714530   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717089   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.717552   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:03:43.719095   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:03:43.721984 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:03:43.721995 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:03:43.742540 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:03:43.742556 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:03:43.780241 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:03:43.780259 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:03:43.834318 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:03:43.834339 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:03:43.869987 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:43.870005 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:43.946032 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:03:43.946053 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	I0804 09:03:43.973679 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:03:43.973697 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:03:43.998917 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:03:43.998935 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:03:44.019361 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:03:44.019378 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:03:46.564446 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:03:46.564898 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:03:46.564992 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:03:46.584902 1661480 logs.go:282] 3 containers: [20f5be32354b a20e277f239a a70a68ec6169]
	I0804 09:03:46.585028 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:03:46.610427 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:03:46.610492 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:03:46.627832 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.627848 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:03:46.627896 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:03:46.662895 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:03:46.662956 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:03:46.679864 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.679882 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:03:46.679929 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:03:46.697936 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:03:46.697999 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:03:46.716993 1661480 logs.go:282] 0 containers: []
	W0804 09:03:46.717008 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:03:46.717020 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:03:46.717029 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:03:46.790622 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:03:46.790643 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:03:46.809548 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:03:46.809566 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:04:08.045069 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.235482683s)
	W0804 09:04:08.045100 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:03:56.860697   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:06.861827   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:04:08.039221   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:51136->[::1]:8441: read: connection reset by peer"
	E0804 09:04:08.039948   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:08.041660   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:08.045109 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:08.045120 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:08.071094 1661480 logs.go:123] Gathering logs for kube-apiserver [a20e277f239a] ...
	I0804 09:04:08.071112 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20e277f239a"
	W0804 09:04:08.089428 1661480 logs.go:130] failed kube-apiserver [a20e277f239a]: command: /bin/bash -c "docker logs --tail 400 a20e277f239a" /bin/bash -c "docker logs --tail 400 a20e277f239a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: a20e277f239a
	I0804 09:04:08.089437 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:08.089448 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:08.129150 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:08.129169 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:08.185332 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:08.185356 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:08.207810 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:08.207830 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:08.233521 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:08.233539 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:08.253969 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:08.253985 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:08.299455 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:08.299476 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:10.840062 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:10.840666 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:10.840762 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:10.860521 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:10.860576 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:10.877749 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:10.877804 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:10.894797 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.894809 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:10.894851 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:10.911920 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:10.911993 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:10.929397 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.929412 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:10.929461 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:10.947092 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:04:10.947149 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:10.964066 1661480 logs.go:282] 0 containers: []
	W0804 09:04:10.964083 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:10.964095 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:10.964107 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:10.983914 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:10.983930 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:11.020490 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:11.020510 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:11.039187 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:11.039203 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:11.095001 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:11.087446   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.087938   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089522   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.089962   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:11.091585   15626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:11.095012 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:11.095022 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:11.120789 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:11.120807 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:11.146008 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:11.146024 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:11.166112 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:11.166128 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:11.204792 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:11.204810 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:11.249456 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:11.249479 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:11.325884 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:11.325911 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:13.884709 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:13.885223 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:13.885353 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:13.904359 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:13.904417 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:13.922238 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:13.922302 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:13.939358 1661480 logs.go:282] 0 containers: []
	W0804 09:04:13.939372 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:13.939426 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:13.956853 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:13.956910 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:13.974857 1661480 logs.go:282] 0 containers: []
	W0804 09:04:13.974869 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:13.974908 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:13.992568 1661480 logs.go:282] 1 containers: [ef4985b5f2b9]
	I0804 09:04:13.992628 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:14.009924 1661480 logs.go:282] 0 containers: []
	W0804 09:04:14.009937 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:14.009947 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:14.009962 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:14.061962 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:14.061980 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:14.105751 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:14.105768 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:14.159867 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:14.152559   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.153066   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154592   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.154981   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:14.156381   15793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:14.159880 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:14.159892 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:14.180879 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:14.180897 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:14.223204 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:14.223223 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:14.244081 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:14.244097 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:14.279867 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:14.279884 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:14.357345 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:14.357368 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:14.375771 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:14.375787 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:14.401599 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:14.401615 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:16.929311 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:16.929726 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:16.929806 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:16.949884 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:16.949946 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:16.966827 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:16.966875 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:16.984179 1661480 logs.go:282] 0 containers: []
	W0804 09:04:16.984194 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:16.984241 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:17.001543 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:17.001596 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:17.018974 1661480 logs.go:282] 0 containers: []
	W0804 09:04:17.018985 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:17.019032 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:17.037024 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:17.037087 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:17.067627 1661480 logs.go:282] 0 containers: []
	W0804 09:04:17.067640 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:17.067650 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:17.067662 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:17.089231 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:17.089266 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:17.145083 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:17.137004   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.137530   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139081   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.139547   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:17.141048   16023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:17.145095 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:17.145107 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:17.183037 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:17.183057 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:17.224495 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:17.224513 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:17.277939 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:17.277961 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:17.299213 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:17.299229 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:17.343379 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:17.343397 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:17.368834 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:17.368850 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:17.388736 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:17.388752 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:17.408859 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:17.408875 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:17.445491 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:17.445507 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:20.023254 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:20.023726 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:20.023805 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:20.042775 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:20.042834 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:20.060600 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:20.060658 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:20.078019 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.078036 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:20.078074 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:20.096002 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:20.096071 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:20.112684 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.112698 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:20.112741 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:20.130951 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:20.131021 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:20.147664 1661480 logs.go:282] 0 containers: []
	W0804 09:04:20.147675 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:20.147685 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:20.147696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:20.166143 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:20.166161 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:20.221888 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:20.214386   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.214988   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216543   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.216938   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:20.218460   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:20.221899 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:20.221912 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:20.247606 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:20.247623 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:20.269435 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:20.269454 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:20.322915 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:20.322934 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:20.344869 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:20.344885 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:20.388193 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:20.388210 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:20.424170 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:20.424187 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:20.496074 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:20.496094 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:20.522349 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:20.522368 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:20.563687 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:20.563710 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:23.085074 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:23.085599 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:23.085689 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:23.104776 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:23.104833 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:23.122616 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:23.122682 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:23.140381 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.140396 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:23.140449 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:23.158043 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:23.158105 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:23.175945 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.175960 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:23.176004 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:23.193909 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:23.193981 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:23.211258 1661480 logs.go:282] 0 containers: []
	W0804 09:04:23.211272 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:23.211282 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:23.211292 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:23.236427 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:23.236445 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:23.275922 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:23.275944 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:23.296315 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:23.296332 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:23.317009 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:23.317026 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:23.357932 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:23.357953 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:23.394105 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:23.394122 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:23.467404 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:23.467423 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:23.494717 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:23.494734 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:23.515040 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:23.515055 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:23.566202 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:23.566221 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:23.586603 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:23.586621 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:23.640949 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:23.633581   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.634121   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.635682   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.636105   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:23.637658   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:26.142544 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:26.143011 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:26.143111 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:26.163238 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:26.163305 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:26.181526 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:26.181598 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:26.198994 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.199008 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:26.199055 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:26.216773 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:26.216843 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:26.234131 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.234150 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:26.234204 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:26.251698 1661480 logs.go:282] 2 containers: [9d4ac6608b3c ef4985b5f2b9]
	I0804 09:04:26.251757 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:26.269113 1661480 logs.go:282] 0 containers: []
	W0804 09:04:26.269125 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:26.269136 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:26.269147 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:26.309761 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:26.309780 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:26.362115 1661480 logs.go:123] Gathering logs for kube-controller-manager [ef4985b5f2b9] ...
	I0804 09:04:26.362133 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef4985b5f2b9"
	I0804 09:04:26.382406 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:26.382421 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:26.427317 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:26.427338 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:26.445864 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:26.445879 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:26.470826 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:26.470845 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:26.490799 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:26.490814 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:26.526252 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:26.526276 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:26.599966 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:26.599993 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:26.655307 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:26.648488   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.649034   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.650536   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.650909   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:26.652405   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
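	The describe-nodes failure is a symptom of the same outage, not a separate one: the kubeconfig on the node points kubectl at localhost:8441, and with the apiserver container down nothing is listening there. A quick sketch to confirm the server rather than kubectl is at fault, using the binary and kubeconfig paths from the log above:

	    # prints "ok" once the apiserver answers; "connection refused" while it is down
	    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl get --raw /healthz \
	      --kubeconfig=/var/lib/minikube/kubeconfig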
	I0804 09:04:26.655322 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:26.655332 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:26.680910 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:26.680927 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:29.201316 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:29.201803 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
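	This healthz probe is what the start loop keeps retrying every few seconds: a plain HTTPS GET against the apiserver on the node IP. It can be reproduced by hand; -k is needed because the host does not trust the cluster's serving certificate:

	    # exits 0 and prints "ok" when healthy; exits non-zero while the port is closed
	    curl -sk --max-time 5 https://192.168.49.2:8441/healthz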
	I0804 09:04:29.201888 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:29.220916 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
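	Container discovery leans on the cri-dockerd naming convention: every pod container is named k8s_<container>_<pod>_<namespace>_..., so a name filter finds control-plane containers without going through the (dead) API. Two IDs for one component means the apiserver has already been restarted once; the exited instance is kept so its logs stay readable. A sketch that also shows which instance is dead:

	    # list all kube-apiserver containers, running or exited, with their status
	    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}\t{{.Status}}'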
	I0804 09:04:29.220981 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:29.240273 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:29.240334 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:29.258749 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.258769 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:29.258820 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:29.276728 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:29.276789 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:29.294103 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.294118 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:29.294162 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:29.312051 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:29.312121 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:29.329450 1661480 logs.go:282] 0 containers: []
	W0804 09:04:29.329463 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:29.329472 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:29.329482 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:29.406478 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:29.406501 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:29.449867 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:29.449885 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:29.505732 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:29.505753 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:29.527260 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:29.527278 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
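	Runtime-side problems (image pulls, CRI errors, containers the runtime itself killed) land in the docker and cri-docker units rather than the kubelet journal, hence the combined query. By hand:

	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager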
	I0804 09:04:29.568876 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:29.568900 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
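	The dmesg gather narrows the kernel ring buffer to warnings and worse, which is where oom-killer activity would appear if the kernel rather than Kubernetes were terminating the control plane. Flag by flag (util-linux dmesg): -H human-readable output, -P no pager, -L=never no color codes, --level keeps only the listed priorities:

	    # kernel warnings and above; look here for oom-killer or cgroup messages
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400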
	I0804 09:04:29.588395 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:29.588411 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:29.642645 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:29.635519   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.636038   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.637658   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.638071   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:29.639537   16845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:29.642654 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:29.642665 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:29.668637 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:29.668654 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:29.693869 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:29.693888 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:29.714488 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:29.714503 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:32.250740 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:32.251210 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:32.251290 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:32.270825 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:32.270884 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:32.288747 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:32.288802 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:32.306493 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.306505 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:32.306552 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:32.323960 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:32.324014 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:32.341171 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.341187 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:32.341230 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:32.358803 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:32.358860 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:32.375636 1661480 logs.go:282] 0 containers: []
	W0804 09:04:32.375647 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:32.375657 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:32.375670 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:32.395884 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:32.395899 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:32.438480 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:32.438499 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:32.482900 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:32.482918 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:32.518645 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:32.518662 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:32.591929 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:32.591950 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:32.644879 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:32.644899 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:32.665398 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:32.665413 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:32.684813 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:32.684830 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:32.738309 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:32.731481   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.731997   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.733547   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.733950   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:32.735467   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:32.738320 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:32.738331 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:32.763969 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:32.763987 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:35.291352 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:35.291810 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:35.291895 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:35.311568 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:35.311636 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:35.329568 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:35.329650 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:35.347266 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.347276 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:35.347315 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:35.364992 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:35.365054 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:35.381643 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.381657 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:35.381696 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:35.398762 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:35.398830 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:35.415553 1661480 logs.go:282] 0 containers: []
	W0804 09:04:35.415568 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:35.415579 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:35.415590 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:35.434052 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:35.434066 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:35.488645 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:35.481621   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.482093   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.483610   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.483982   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:35.485495   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:35.488656 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:35.488666 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:35.532366 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:35.532384 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:35.552538 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:35.552555 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:35.588052 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:35.588072 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:35.666164 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:35.666184 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:35.693682 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:35.693700 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:35.718989 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:35.719004 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:35.739132 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:35.739149 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:35.792779 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:35.792799 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:38.337951 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:38.338399 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:38.338478 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:38.357165 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:38.357226 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:38.374097 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:38.374155 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:38.391382 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.391396 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:38.391442 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:38.408993 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:38.409051 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:38.426050 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.426065 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:38.426108 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:38.443913 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:38.443969 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:38.460846 1661480 logs.go:282] 0 containers: []
	W0804 09:04:38.460858 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:38.460868 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:38.460883 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:38.538741 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:38.538763 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:38.557324 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:38.557344 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:38.611322 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:38.604134   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.604668   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606185   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.606583   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:38.607975   17354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:38.611333 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:38.611344 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:38.651785 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:38.651803 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:38.704282 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:38.704300 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:38.748296 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:38.748316 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:38.788934 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:38.788954 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:38.813911 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:38.813928 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:38.838936 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:38.838953 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:38.858717 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:38.858736 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:41.379671 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:41.380124 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:41.380209 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:41.398983 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:41.399040 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:41.417150 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:41.417203 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:41.434806 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.434819 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:41.434860 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:41.452250 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:41.452314 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:41.469520 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.469535 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:41.469583 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:41.487739 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:41.487809 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:41.505191 1661480 logs.go:282] 0 containers: []
	W0804 09:04:41.505207 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:41.505219 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:41.505231 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:41.525061 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:41.525078 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:41.560648 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:41.560665 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:41.586056 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:41.586076 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:41.606348 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:41.606364 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:41.647048 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:41.647072 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:41.688983 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:41.689004 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:41.770298 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:41.770332 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:41.790956 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:41.790978 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:41.845157 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:41.838079   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.838593   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840185   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.840709   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:41.842215   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:41.845168 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:41.845179 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:41.870756 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:41.870774 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:44.425368 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:44.425831 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:44.425949 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:44.446645 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:44.446699 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:44.464564 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:44.464619 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:44.482513 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.482525 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:44.482568 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:44.500219 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:44.500270 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:44.517554 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.517571 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:44.517623 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:44.535531 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:44.535609 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:44.552895 1661480 logs.go:282] 0 containers: []
	W0804 09:04:44.552911 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:44.552922 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:44.552937 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:44.588906 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:44.588923 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:44.668044 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:44.668073 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:44.688833 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:44.688850 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:44.744103 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:44.737229   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.737782   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739326   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.739679   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:44.741202   17732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:44.744120 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:44.744132 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:44.771558 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:44.771575 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:44.798390 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:44.798407 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:44.818712 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:44.818730 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:44.860754 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:44.860771 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:44.903154 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:44.903172 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:44.959593 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:44.959614 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:47.481798 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:47.482267 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:47.482394 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:47.501436 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:47.501507 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:47.519403 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:47.519456 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:47.536505 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.536517 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:47.536559 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:47.555052 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:47.555108 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:47.572292 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.572308 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:47.572378 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:47.589316 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:47.589387 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:47.606568 1661480 logs.go:282] 0 containers: []
	W0804 09:04:47.606583 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:47.606592 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:47.606605 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:47.660924 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:47.654305   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.654756   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656225   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.656600   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:47.658040   17911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:47.660934 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:47.660945 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:47.686316 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:47.686336 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:47.711494 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:47.711510 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:47.755256 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:47.755279 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:47.808519 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:47.808541 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:47.829575 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:47.829592 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:47.850735 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:47.850752 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:47.892056 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:47.892076 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:47.929604 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:47.929623 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:48.003755 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:48.003779 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:50.522949 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:50.523426 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:50.523511 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:50.542559 1661480 logs.go:282] 2 containers: [20f5be32354b a70a68ec6169]
	I0804 09:04:50.542623 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:50.561817 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:50.561873 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:50.580293 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.580306 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:50.580358 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:50.598065 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:50.598132 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:50.615051 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.615064 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:50.615102 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:50.634158 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:50.634219 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:50.651067 1661480 logs.go:282] 0 containers: []
	W0804 09:04:50.651079 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:04:50.651088 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:04:50.651098 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	I0804 09:04:50.675452 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:04:50.675468 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	I0804 09:04:50.696108 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:04:50.696124 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:04:50.739266 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:04:50.739285 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:04:50.757817 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:50.757839 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:04:50.812181 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:04:50.805280   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.805733   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807319   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.807746   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:04:50.809261   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:04:50.812192 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:04:50.812204 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:04:50.837813 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:04:50.837830 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:04:50.881332 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:04:50.881350 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:04:50.933150 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:04:50.933172 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:04:50.955107 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:04:50.955127 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:04:50.991284 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:50.991302 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:53.570964 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:04:53.571444 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:04:53.571539 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:04:53.591352 1661480 logs.go:282] 3 containers: [45dd8fe239bc 20f5be32354b a70a68ec6169]
	I0804 09:04:53.591419 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:04:53.610707 1661480 logs.go:282] 1 containers: [e4c966ab8463]
	I0804 09:04:53.610764 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:04:53.630949 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.630964 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:04:53.631011 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:04:53.665523 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:04:53.665599 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:04:53.683393 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.683410 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:04:53.683463 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:04:53.700974 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:04:53.701080 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:04:53.719520 1661480 logs.go:282] 0 containers: []
	W0804 09:04:53.719534 1661480 logs.go:284] No container was found matching "kindnet"
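Each diagnostic cycle begins, as above, by enumerating control-plane containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`. A minimal local sketch of that enumeration (the helper name is an illustrative assumption, and minikube actually runs these commands inside the node over ssh_runner rather than locally):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches k8s_<component>, mirroring the repeated
    // `docker ps -a --filter=name=... --format={{.ID}}` calls in the log.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

The empty results for coredns, kube-proxy, and kindnet above mean those pods were never created, consistent with the apiserver being down.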
	I0804 09:04:53.719543 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:04:53.719556 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:04:53.801389 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:04:53.801410 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:05:15.553212 1661480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.751766465s)
	W0804 09:05:15.553274 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:03.857554   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:13.859266   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:15.547844   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:55018->[::1]:8441: read: connection reset by peer"
	E0804 09:05:15.548469   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:15.550082   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:03.857554   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:13.859266   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 09:05:15.547844   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused - error from a previous attempt: read tcp [::1]:55018->[::1]:8441: read: connection reset by peer"
	E0804 09:05:15.548469   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:15.550082   18334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:15.553282 1661480 logs.go:123] Gathering logs for kube-apiserver [20f5be32354b] ...
	I0804 09:05:15.553295 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f5be32354b"
	W0804 09:05:15.571925 1661480 logs.go:130] failed kube-apiserver [20f5be32354b]: command: /bin/bash -c "docker logs --tail 400 20f5be32354b" /bin/bash -c "docker logs --tail 400 20f5be32354b": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 20f5be32354b
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 20f5be32354b
	
	** /stderr **
	I0804 09:05:15.571940 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:15.571956 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:15.597489 1661480 logs.go:123] Gathering logs for etcd [e4c966ab8463] ...
	I0804 09:05:15.597508 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4c966ab8463"
	W0804 09:05:15.615861 1661480 logs.go:130] failed etcd [e4c966ab8463]: command: /bin/bash -c "docker logs --tail 400 e4c966ab8463" /bin/bash -c "docker logs --tail 400 e4c966ab8463": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: e4c966ab8463
	 output: 
	** stderr ** 
	Error response from daemon: No such container: e4c966ab8463
	
	** /stderr **
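The two `No such container` failures just above expose a race inherent in this scheme: 20f5be32354b and e4c966ab8463 were captured in the 09:04:53 enumeration, but had been removed by the time `docker logs` ran at 09:05:15, consistent with a crash-looping apiserver and etcd being torn down and recreated in between (the 09:05:18 pass indeed lists a new etcd container, 28a5795de0c3). A sketch of tolerating that race during collection; the function name is an illustrative assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherLogs fetches the last 400 log lines for a container ID
    // captured during an earlier enumeration pass, tolerating the race
    // seen above where the container is removed before its logs are read.
    func gatherLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	if err != nil && strings.Contains(string(out), "No such container") {
    		// The ID was valid moments ago; treat its disappearance as a
    		// distinct, expected condition rather than a hard failure.
    		return "", fmt.Errorf("container %s disappeared before logs could be read", id)
    	}
    	return string(out), err
    }

    func main() {
    	// 20f5be32354b is the apiserver container that vanished above.
    	if _, err := gatherLogs("20f5be32354b"); err != nil {
    		fmt.Println(err)
    	}
    }

Minikube's own log gatherer takes the simpler route visible here: it logs the failure at warning level and moves on to the next container.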
	I0804 09:05:15.615870 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:15.615881 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:15.658508 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:15.658527 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:15.710914 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:15.710934 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:15.756829 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:15.756848 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:15.775591 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:15.775608 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:15.802209 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:15.802225 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:15.822675 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:15.822691 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:18.362881 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:18.363337 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:18.363427 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:18.382725 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:18.382780 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:18.400834 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:18.400903 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:18.418630 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.418643 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:18.418699 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:18.436449 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:18.436510 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:18.453593 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.453609 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:18.453670 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:18.470809 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:18.470867 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:18.487902 1661480 logs.go:282] 0 containers: []
	W0804 09:05:18.487915 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:18.487925 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:18.487935 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:18.570521 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:18.570543 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:18.625182 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:18.618258   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.618805   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620328   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620711   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.622272   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:18.618258   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.618805   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620328   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.620711   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:18.622272   18641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:18.625193 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:18.625204 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:18.651165 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:18.651185 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:18.671188 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:18.671203 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:18.714383 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:18.714403 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:18.750997 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:18.751016 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:18.769854 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:18.769870 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:18.795165 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:18.795180 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:18.849360 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:18.849380 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:18.871229 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:18.871254 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:21.418353 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:21.418833 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:21.418922 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:21.438054 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:21.438113 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:21.455587 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:21.455654 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:21.472934 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.472954 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:21.473001 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:21.491717 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:21.491795 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:21.509543 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.509559 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:21.509604 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:21.527160 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:21.527217 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:21.544207 1661480 logs.go:282] 0 containers: []
	W0804 09:05:21.544222 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:21.544234 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:21.544243 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:21.563890 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:21.563904 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:21.583720 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:21.583737 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:21.602128 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:21.602141 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:21.658059 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:21.650567   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.651103   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.652665   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.653107   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.654674   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:21.650567   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.651103   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.652665   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.653107   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:21.654674   18843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:21.658074 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:21.658084 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:21.685555 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:21.685574 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:21.712525 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:21.712541 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:21.756390 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:21.756410 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:21.810403 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:21.810424 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:21.853991 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:21.854013 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:21.889567 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:21.889585 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:24.473851 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:24.474320 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:24.474415 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:24.493643 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:24.493706 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:24.511933 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:24.511991 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:24.529775 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.529790 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:24.529844 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:24.547893 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:24.547953 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:24.565265 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.565280 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:24.565322 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:24.582372 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:24.582439 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:24.600116 1661480 logs.go:282] 0 containers: []
	W0804 09:05:24.600132 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:24.600144 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:24.600157 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:24.625394 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:24.625413 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:24.649921 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:24.649938 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:24.669931 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:24.669947 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:24.724632 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:24.717099   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.717627   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719144   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719576   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.721085   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:24.717099   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.717627   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719144   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.719576   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:24.721085   19028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:24.724643 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:24.724654 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:24.745114 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:24.745130 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:24.791138 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:24.791159 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:24.844211 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:24.844232 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:24.864815 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:24.864831 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:24.905868 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:24.905889 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:24.944193 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:24.944210 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:27.526606 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:27.527052 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:27.527133 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:27.546023 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:27.546102 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:27.564059 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:27.564125 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:27.581355 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.581372 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:27.581421 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:27.598969 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:27.599042 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:27.616326 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.616340 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:27.616398 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:27.633567 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:27.633636 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:27.650100 1661480 logs.go:282] 0 containers: []
	W0804 09:05:27.650116 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:27.650129 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:27.650143 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:27.674675 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:27.674691 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:27.694432 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:27.694452 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:27.740275 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:27.740293 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:27.792672 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:27.792692 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:27.837134 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:27.837152 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:27.862402 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:27.862418 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:27.884136 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:27.884160 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:27.921302 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:27.921320 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:28.005198 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:28.005221 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:28.024305 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:28.024319 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:28.078812 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:28.071766   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.072266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.073814   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.074266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.075728   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:28.071766   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.072266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.073814   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.074266   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:28.075728   19278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
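Two distinct failure modes recur in the kubectl stderr in this section: the earlier `net/http: TLS handshake timeout` entries at 09:05:03 and 09:05:13 (a listener accepted the TCP connection but never completed the handshake, consistent with an apiserver still coming up or being killed mid-start) and the `connect: connection refused` entries everywhere else (nothing listening on 8441 at all). A small probe to tell the two apart — a sketch only; the address, the function name, and the skipped certificate verification are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"time"
    )

    // classify separates the two failure modes: a dial error means no
    // process is accepting connections on the port, while a successful
    // dial followed by a failed or hung handshake matches the
    // "TLS handshake timeout" entries in the kubectl stderr.
    func classify(addr string) string {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return fmt.Sprintf("dial failed, no listener: %v", err)
    	}
    	defer conn.Close()
    	tconn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
    	tconn.SetDeadline(time.Now().Add(5 * time.Second))
    	if err := tconn.Handshake(); err != nil {
    		return fmt.Sprintf("TCP accepted but TLS handshake failed: %v", err)
    	}
    	return "listener up, TLS handshake completed"
    }

    func main() {
    	fmt.Println(classify("192.168.49.2:8441"))
    }

Run from the host against the node IP, a dial failure here corresponds to the `api_server.go:269` lines above, while a handshake error would correspond to the kubectl timeouts seen while the apiserver was briefly accepting connections.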
	I0804 09:05:30.579425 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:30.579882 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:30.579979 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:30.599053 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:30.599118 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:30.616639 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:30.616706 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:30.634419 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.634434 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:30.634478 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:30.652037 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:30.652091 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:30.668537 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.668550 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:30.668601 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:30.686111 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:30.686177 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:30.703170 1661480 logs.go:282] 0 containers: []
	W0804 09:05:30.703183 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:30.703197 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:30.703208 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:30.780512 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:30.780534 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:30.835862 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:30.828571   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.829089   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.830648   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.831084   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.832656   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:30.828571   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.829089   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.830648   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.831084   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:30.832656   19369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:30.835871 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:30.835884 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:30.862953 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:30.862971 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:30.906430 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:30.906449 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:30.962204 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:30.962222 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:30.983077 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:30.983098 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:31.027250 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:31.027271 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:31.064477 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:31.064493 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:31.082683 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:31.082700 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:31.107897 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:31.107916 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:33.629309 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:33.629783 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:33.629874 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:33.649062 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:33.649144 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:33.667342 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:33.667406 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:33.684879 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.684891 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:33.684936 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:33.702256 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:33.702310 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:33.719436 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.719447 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:33.719486 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:33.737005 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:33.737062 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:33.754700 1661480 logs.go:282] 0 containers: []
	W0804 09:05:33.754716 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:33.754728 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:33.754740 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:33.830846 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:33.830868 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:33.856980 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:33.856997 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:33.909389 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:33.909410 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:33.929778 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:33.929794 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:33.965678 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:33.965696 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:33.984178 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:33.984194 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:34.038018 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:34.031060   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.031554   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033042   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033546   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.035064   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:34.031060   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.031554   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033042   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.033546   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:34.035064   19608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:34.038028 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:34.038040 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:34.065147 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:34.065164 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:34.085201 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:34.085217 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:34.131576 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:34.131598 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:36.677320 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:36.677738 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:36.677816 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:36.696778 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:36.696834 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:36.714338 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:36.714400 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:36.731585 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.731597 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:36.731648 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:36.749262 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:36.749323 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:36.766369 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.766382 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:36.766424 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:36.783683 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:36.783747 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:36.800562 1661480 logs.go:282] 0 containers: []
	W0804 09:05:36.800577 1661480 logs.go:284] No container was found matching "kindnet"
	I0804 09:05:36.800589 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:36.800601 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:36.826322 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:36.826341 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:36.846705 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:36.846725 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:36.900647 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:36.900670 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:36.945061 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:36.945082 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:05:36.980935 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:36.980953 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:36.999355 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:36.999370 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:37.045302 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:37.045321 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:37.066069 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:37.066087 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:37.147619 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:37.147641 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:37.204004 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:37.196190   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.197826   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.198292   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.199819   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.200207   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:05:37.196190   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.197826   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.198292   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.199819   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:37.200207   19818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:05:37.204017 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:37.204029 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:39.729976 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:39.730386 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:39.730457 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:05:39.749322 1661480 logs.go:282] 2 containers: [45dd8fe239bc a70a68ec6169]
	I0804 09:05:39.749391 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:05:39.767341 1661480 logs.go:282] 1 containers: [28a5795de0c3]
	I0804 09:05:39.767399 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:05:39.783917 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.783928 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:05:39.783968 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:05:39.801060 1661480 logs.go:282] 2 containers: [1221127d6d59 3206d43d6e58]
	I0804 09:05:39.801127 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:05:39.818194 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.818205 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:05:39.818259 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:05:39.835049 1661480 logs.go:282] 1 containers: [9d4ac6608b3c]
	I0804 09:05:39.835119 1661480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:05:39.851781 1661480 logs.go:282] 0 containers: []
	W0804 09:05:39.851792 1661480 logs.go:284] No container was found matching "kindnet"
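The block above is minikube locating each control-plane container before pulling its logs: one `docker ps -a` per component, filtered on the `k8s_<component>` container-name prefix. Below is a minimal Go sketch of that discovery step, assuming only that `docker` is on PATH; it is an illustration, not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; an empty slice means "No container was found matching".
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}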
	I0804 09:05:39.851802 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:05:39.851811 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:05:39.871504 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:05:39.871519 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:05:39.926544 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:05:39.919634   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.920101   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.921669   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.922050   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:05:39.923665   19922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0804 09:05:39.926554 1661480 logs.go:123] Gathering logs for kube-apiserver [45dd8fe239bc] ...
	I0804 09:05:39.926565 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45dd8fe239bc"
	I0804 09:05:39.952624 1661480 logs.go:123] Gathering logs for etcd [28a5795de0c3] ...
	I0804 09:05:39.952638 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28a5795de0c3"
	I0804 09:05:39.972011 1661480 logs.go:123] Gathering logs for kube-scheduler [3206d43d6e58] ...
	I0804 09:05:39.972027 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3206d43d6e58"
	I0804 09:05:40.025874 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:05:40.025896 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:05:40.109801 1661480 logs.go:123] Gathering logs for kube-apiserver [a70a68ec6169] ...
	I0804 09:05:40.109821 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a70a68ec6169"
	I0804 09:05:40.136255 1661480 logs.go:123] Gathering logs for kube-scheduler [1221127d6d59] ...
	I0804 09:05:40.136272 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1221127d6d59"
	I0804 09:05:40.183580 1661480 logs.go:123] Gathering logs for kube-controller-manager [9d4ac6608b3c] ...
	I0804 09:05:40.183599 1661480 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d4ac6608b3c"
	I0804 09:05:40.204493 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:05:40.204511 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:05:40.248273 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:05:40.248291 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
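Each cycle opens with the probe at api_server.go:253: a GET against https://192.168.49.2:8441/healthz, where a refused connection is logged as "stopped" and the loop retries roughly every three seconds. A minimal Go sketch of such a poll follows; the endpoint and interval are taken from the log, while the TLS skip and the four-minute deadline are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver inside the node serves a self-signed certificate,
		// so a local health probe would typically skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // "connection refused" lands here
			time.Sleep(3 * time.Second)  // the log shows ~3s between attempts
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
	fmt.Println("apiserver never became healthy")
}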
	[... the healthz probe and log-gathering cycle above repeats with near-identical output at 09:05:42, 09:05:45, 09:05:48, and 09:05:51, each attempt ending in the same "connection refused" and the same failed "describe nodes" ...]
	I0804 09:05:55.009056 1661480 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0804 09:05:55.009576 1661480 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:05:55.009639 1661480 kubeadm.go:593] duration metric: took 4m5.290563198s to restartPrimaryControlPlane
	W0804 09:05:55.009718 1661480 out.go:270] ! Unable to restart control-plane node(s), will reset cluster
	I0804 09:05:55.009762 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:05:55.871445 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
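Having given up on restarting the existing control plane, minikube falls back to `kubeadm reset` and then confirms the kubelet unit is no longer active. A rough Go sketch of that fallback, using the binary path, CRI socket, and flags from the log lines above (the sudo/env PATH handling is simplified, and `service` is dropped from the is-active call):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as in the log: kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	reset := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm",
		"reset", "--cri-socket", "/var/run/cri-dockerd.sock", "--force")
	if out, err := reset.CombinedOutput(); err != nil {
		fmt.Printf("reset failed: %v\n%s", err, out)
		return
	}
	// `systemctl is-active --quiet` exits nonzero when the unit is not active,
	// so a non-nil error here means the kubelet has stopped, which is the
	// expected state after a reset.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active")
	} else {
		fmt.Println("kubelet is still active")
	}
}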
	I0804 09:05:55.882275 1661480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:05:55.890471 1661480 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:05:55.890520 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:05:55.898415 1661480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:05:55.898428 1661480 kubeadm.go:157] found existing configuration files:
	
	I0804 09:05:55.898465 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:05:55.906151 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:05:55.906189 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:05:55.913607 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:05:55.921040 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:05:55.921073 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:05:55.928201 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:05:55.936065 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:05:55.936113 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:05:55.943534 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:05:55.951211 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:05:55.951253 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
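The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed before `kubeadm init` rewrites it. A compact Go sketch of the same check (paths and endpoint copied from the log; running it for real would need root and an actual /etc/kubernetes):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale, like the
			// `grep ... || rm -f ...` pair in the log above.
			os.Remove(f)
			fmt.Println("removed stale", f)
			continue
		}
		fmt.Println("kept", f)
	}
}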
	I0804 09:05:55.958383 1661480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:05:55.991847 1661480 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:05:55.991901 1661480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:05:56.004623 1661480 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:05:56.004692 1661480 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:05:56.004732 1661480 kubeadm.go:310] OS: Linux
	I0804 09:05:56.004768 1661480 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:05:56.004807 1661480 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:05:56.004862 1661480 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:05:56.004941 1661480 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:05:56.005006 1661480 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:05:56.005083 1661480 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:05:56.005137 1661480 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:05:56.005193 1661480 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:05:56.005278 1661480 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:05:56.054357 1661480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:05:56.054479 1661480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:05:56.054635 1661480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:05:56.064998 1661480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:05:56.067952 1661480 out.go:235]   - Generating certificates and keys ...
	I0804 09:05:56.068027 1661480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:05:56.068074 1661480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:05:56.068144 1661480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:05:56.068209 1661480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:05:56.068279 1661480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:05:56.068322 1661480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:05:56.068385 1661480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:05:56.068433 1661480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:05:56.068492 1661480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:05:56.068549 1661480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:05:56.068580 1661480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:05:56.068624 1661480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:05:56.846466 1661480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:05:57.293494 1661480 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:05:57.586648 1661480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:05:57.707352 1661480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:05:58.140308 1661480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:05:58.141365 1661480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:05:58.143879 1661480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:05:58.146322 1661480 out.go:235]   - Booting up control plane ...
	I0804 09:05:58.146440 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:05:58.146521 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:05:58.146580 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:05:58.157812 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:05:58.157949 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:05:58.163040 1661480 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:05:58.163314 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:05:58.163387 1661480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:05:58.241217 1661480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:05:58.241378 1661480 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:05:59.242975 1661480 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001870906s
	I0804 09:05:59.246768 1661480 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:05:59.246925 1661480 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I0804 09:05:59.247072 1661480 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:05:59.247191 1661480 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:06:00.899560 1661480 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.652491519s
	I0804 09:06:31.896796 1661480 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 32.64974442s
	I0804 09:09:59.247676 1661480 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	I0804 09:09:59.247761 1661480 kubeadm.go:310] 
	I0804 09:09:59.247995 1661480 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:09:59.248237 1661480 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0804 09:09:59.248440 1661480 kubeadm.go:310] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I0804 09:09:59.248589 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:09:59.248701 1661480 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:09:59.248843 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:09:59.248851 1661480 kubeadm.go:310] 
	I0804 09:09:59.251561 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:09:59.251846 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:09:59.251983 1661480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:09:59.252295 1661480 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	I0804 09:09:59.252358 1661480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0804 09:09:59.252583 1661480 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870906s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.652491519s
	[control-plane-check] kube-scheduler is healthy after 32.64974442s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000445769s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
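The [control-plane-check] phase above waits on three endpoints: kube-apiserver at https://192.168.49.2:8441/livez, kube-controller-manager at https://127.0.0.1:10257/healthz, and kube-scheduler at https://127.0.0.1:10259/livez. Here the controller-manager and scheduler came up but the apiserver never answered, which is what fails the init. A rough Go sketch of probing those three endpoints concurrently (URLs from the log; the timeout and TLS handling are assumptions, and kubeadm's real check retries until a 4m0s deadline):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	endpoints := map[string]string{
		"kube-apiserver":          "https://192.168.49.2:8441/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	client := &http.Client{
		Timeout:   10 * time.Second, // matches the ?timeout=10s in the error above
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	var wg sync.WaitGroup
	for name, url := range endpoints {
		wg.Add(1)
		go func(name, url string) {
			defer wg.Done()
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%s is not healthy: %v\n", name, err)
				return
			}
			resp.Body.Close()
			fmt.Printf("%s is healthy: %s\n", name, resp.Status)
		}(name, url)
	}
	wg.Wait()
}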
	
	I0804 09:09:59.252631 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:10:00.037426 1661480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:10:00.048756 1661480 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:10:00.048799 1661480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:10:00.056703 1661480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:10:00.056711 1661480 kubeadm.go:157] found existing configuration files:
	
	I0804 09:10:00.056746 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0804 09:10:00.064271 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:10:00.064310 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:10:00.071720 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0804 09:10:00.079478 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:10:00.079512 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:10:00.086675 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0804 09:10:00.094268 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:10:00.094310 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:10:00.101549 1661480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0804 09:10:00.108748 1661480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:10:00.108780 1661480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:10:00.115895 1661480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:10:00.150607 1661480 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:10:00.150679 1661480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:10:00.163722 1661480 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:10:00.163786 1661480 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:10:00.163846 1661480 kubeadm.go:310] OS: Linux
	I0804 09:10:00.163909 1661480 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:10:00.163960 1661480 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:10:00.164019 1661480 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:10:00.164060 1661480 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:10:00.164099 1661480 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:10:00.164143 1661480 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:10:00.164177 1661480 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:10:00.164213 1661480 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:10:00.164247 1661480 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:10:00.214655 1661480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:10:00.214804 1661480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:10:00.214924 1661480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:10:00.225204 1661480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:10:00.228114 1661480 out.go:235]   - Generating certificates and keys ...
	I0804 09:10:00.228235 1661480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:10:00.228353 1661480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:10:00.228472 1661480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:10:00.228537 1661480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:10:00.228597 1661480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:10:00.228639 1661480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:10:00.228694 1661480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:10:00.228785 1661480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:10:00.228876 1661480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:10:00.228943 1661480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:10:00.228999 1661480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:10:00.229083 1661480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:10:00.330549 1661480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:10:00.508036 1661480 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:10:00.741967 1661480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:10:01.526835 1661480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:10:01.662111 1661480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:10:01.662652 1661480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:10:01.664702 1661480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:10:01.666272 1661480 out.go:235]   - Booting up control plane ...
	I0804 09:10:01.666353 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:10:01.666413 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:10:01.667084 1661480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:10:01.679192 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:10:01.679268 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:10:01.684800 1661480 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:10:01.685864 1661480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:10:01.685922 1661480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:10:01.773321 1661480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:10:01.773477 1661480 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:10:02.774854 1661480 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001670583s
	I0804 09:10:02.777450 1661480 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:10:02.777542 1661480 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I0804 09:10:02.777645 1661480 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:10:02.777709 1661480 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:10:06.220867 1661480 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.44333807s
	I0804 09:10:36.606673 1661480 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 33.829135405s
	I0804 09:14:02.777907 1661480 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	I0804 09:14:02.777973 1661480 kubeadm.go:310] 
	I0804 09:14:02.778102 1661480 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:14:02.778204 1661480 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:14:02.778303 1661480 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:14:02.778415 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:14:02.778499 1661480 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:14:02.778604 1661480 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:14:02.778614 1661480 kubeadm.go:310] 
	I0804 09:14:02.781964 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:14:02.782147 1661480 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:14:02.782232 1661480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:14:02.782512 1661480 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I0804 09:14:02.782622 1661480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
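	[editor's note] The [control-plane-check] phase above polls each component's health endpoint until it answers or a 4m0s budget expires; kube-controller-manager and kube-scheduler passed, but kube-apiserver's /livez at https://192.168.49.2:8441 never did. A hedged Go sketch of that kind of poll (not kubeadm's actual code; InsecureSkipVerify is tolerable only because this throwaway test cluster uses self-signed certs):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitHealthy retries GET url until it returns 200 or timeout elapses.
	    func waitHealthy(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if resp, err := client.Get(url); err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        return fmt.Errorf("%s not healthy after %s", url, timeout)
	    }

	    func main() {
	        // The endpoint and budget come from the kubeadm output above.
	        if err := waitHealthy("https://192.168.49.2:8441/livez", 4*time.Minute); err != nil {
	            fmt.Println(err) // matches the failure mode seen in this run
	        }
	    }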
	I0804 09:14:02.782672 1661480 kubeadm.go:394] duration metric: took 12m13.088610065s to StartCluster
	I0804 09:14:02.782740 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 09:14:02.782800 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 09:14:02.821166 1661480 cri.go:89] found id: "c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	I0804 09:14:02.821177 1661480 cri.go:89] found id: ""
	I0804 09:14:02.821190 1661480 logs.go:282] 1 containers: [c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e]
	I0804 09:14:02.821273 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.824824 1661480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 09:14:02.824881 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 09:14:02.861272 1661480 cri.go:89] found id: "0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	I0804 09:14:02.861286 1661480 cri.go:89] found id: ""
	I0804 09:14:02.861293 1661480 logs.go:282] 1 containers: [0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1]
	I0804 09:14:02.861334 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.864640 1661480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 09:14:02.864684 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 09:14:02.896631 1661480 cri.go:89] found id: ""
	I0804 09:14:02.896648 1661480 logs.go:282] 0 containers: []
	W0804 09:14:02.896654 1661480 logs.go:284] No container was found matching "coredns"
	I0804 09:14:02.896660 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 09:14:02.896720 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 09:14:02.929947 1661480 cri.go:89] found id: "ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e"
	I0804 09:14:02.929961 1661480 cri.go:89] found id: ""
	I0804 09:14:02.929970 1661480 logs.go:282] 1 containers: [ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e]
	I0804 09:14:02.930026 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:02.933377 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 09:14:02.933429 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 09:14:02.966936 1661480 cri.go:89] found id: ""
	I0804 09:14:02.966951 1661480 logs.go:282] 0 containers: []
	W0804 09:14:02.966958 1661480 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:14:02.966962 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 09:14:02.967020 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 09:14:02.998599 1661480 cri.go:89] found id: "19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	I0804 09:14:02.998613 1661480 cri.go:89] found id: ""
	I0804 09:14:02.998622 1661480 logs.go:282] 1 containers: [19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec]
	I0804 09:14:02.998668 1661480 ssh_runner.go:195] Run: which crictl
	I0804 09:14:03.002053 1661480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 09:14:03.002114 1661480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 09:14:03.033926 1661480 cri.go:89] found id: ""
	I0804 09:14:03.033944 1661480 logs.go:282] 0 containers: []
	W0804 09:14:03.033953 1661480 logs.go:284] No container was found matching "kindnet"
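	[editor's note] The cri.go/logs.go lines above enumerate containers per control-plane component by shelling out to crictl. A small illustrative wrapper for the `sudo crictl ps -a --quiet --name=...` calls in the log; a hypothetical helper, not minikube's code, assuming crictl on PATH and non-interactive sudo:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainers returns the container IDs (one per output line) for a
	    // given CRI container name, including exited containers (-a).
	    func listContainers(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
	            ids, err := listContainers(name)
	            if err != nil {
	                fmt.Println(name, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids) // mirrors logs.go:282
	        }
	    }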
	I0804 09:14:03.033973 1661480 logs.go:123] Gathering logs for dmesg ...
	I0804 09:14:03.033985 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:14:03.052185 1661480 logs.go:123] Gathering logs for kube-scheduler [ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e] ...
	I0804 09:14:03.052200 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab71ff54628ca4f3cc1b1899a47413213d9243417fab01b5da5600c18c93458e"
	I0804 09:14:03.109809 1661480 logs.go:123] Gathering logs for kube-controller-manager [19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec] ...
	I0804 09:14:03.109829 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	I0804 09:14:03.144087 1661480 logs.go:123] Gathering logs for Docker ...
	I0804 09:14:03.144103 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:14:03.194929 1661480 logs.go:123] Gathering logs for container status ...
	I0804 09:14:03.194949 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:14:03.230465 1661480 logs.go:123] Gathering logs for kubelet ...
	I0804 09:14:03.230483 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:14:03.308846 1661480 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:14:03.308871 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:14:03.364644 1661480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:03.357491   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.358045   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.359651   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.360110   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.361657   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 09:14:03.357491   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.358045   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.359651   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.360110   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:03.361657   24940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:14:03.364660 1661480 logs.go:123] Gathering logs for kube-apiserver [c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e] ...
	I0804 09:14:03.364672 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	I0804 09:14:03.404334 1661480 logs.go:123] Gathering logs for etcd [0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1] ...
	I0804 09:14:03.404352 1661480 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	W0804 09:14:03.438012 1661480 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001670583s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.44333807s
	[control-plane-check] kube-scheduler is healthy after 33.829135405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000246349s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W0804 09:14:03.438066 1661480 out.go:270] * 
	W0804 09:14:03.438175 1661480 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0804 09:14:03.438197 1661480 out.go:270] * 
	W0804 09:14:03.440048 1661480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:14:03.443944 1661480 out.go:201] 
	W0804 09:14:03.444897 1661480 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0804 09:14:03.444921 1661480 out.go:270] * 
	I0804 09:14:03.447852 1661480 out.go:201] 
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       About a minute ago   Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:11.945566   26173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:11.946150   26173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:11.948549   26173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:11.949074   26173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:11.950760   26173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
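	
	[editor's note] The usage dump above looks like the root cause of the whole run: etcd exits immediately with "flag provided but not defined: -proxy-refresh-interval" (a v2-proxy-era flag that newer etcd releases no longer define), so nothing ever listens on 127.0.0.1:2379, the apiserver's etcd dials are refused, and its /livez check can never pass. A hedged Go sketch of probing whether a given etcd binary still advertises a flag before writing it into a static-pod manifest (assumes etcd is on PATH; the flag name is taken from the failure above):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // supportsFlag runs `binary --help` and searches the combined output
	    // for the flag name. The exit status is ignored when output exists,
	    // since some binaries exit non-zero after printing usage.
	    func supportsFlag(binary, flag string) (bool, error) {
	        out, err := exec.Command(binary, "--help").CombinedOutput()
	        if len(out) == 0 && err != nil {
	            return false, err
	        }
	        return strings.Contains(string(out), flag), nil
	    }

	    func main() {
	        ok, err := supportsFlag("etcd", "proxy-refresh-interval")
	        if err != nil {
	            fmt.Println("could not probe etcd:", err)
	            return
	        }
	        fmt.Println("supports proxy-refresh-interval:", ok) // false here would explain the crash above
	    }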
	
	
	
	==> kernel <==
	 09:14:12 up 1 day, 17:55,  0 users,  load average: 0.11, 0.11, 0.23
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
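	
	[editor's note] Consistent with the etcd crash above, the apiserver retries its etcd client connections until its own storage-factory deadline expires (the F0804 fatal), then the container exits and restarts. A quick sketch of a reachability probe for the client port it keeps failing to reach; a plain TCP dial is enough to distinguish "nothing listening on 2379" (connection refused, as seen here) from a TLS or auth problem:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
	        if err != nil {
	            fmt.Println("etcd unreachable:", err) // expect "connection refused" in this run
	            return
	        }
	        conn.Close()
	        fmt.Println("etcd client port is accepting connections")
	    }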
	
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:14:07.280269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:14:08.128547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:14:10.109602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.633505   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.633818   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.643831   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.643903   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.644026   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:13:58 functional-699837 kubelet[23032]: E0804 09:13:58.609095   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:59 functional-699837 kubelet[23032]: E0804 09:13:59.432444   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:14:02 functional-699837 kubelet[23032]: E0804 09:14:02.693365   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: I0804 09:14:04.635636   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: E0804 09:14:04.636090   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.350524   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.610218   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644074   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: I0804 09:14:08.644186   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644380   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643561   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: I0804 09:14:10.643671   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643844   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.218396   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: I0804 09:14:11.637647   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.638029   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.997440   23032 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	

-- /stdout --
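The excerpt above captures the failure chain for this run: etcd refuses connections on 127.0.0.1:2379, kube-apiserver therefore exits with "Error creating leases: error creating storage factory: context deadline exceeded", and controller-manager, scheduler and kubelet registration all fail in turn against https://192.168.49.2:8441. A minimal triage sketch, reusing the ssh invocation form this report already exercises, and assuming ss and curl are present in the kicbase image (an assumption, not something this run verifies):

	# is anything listening on etcd's client port inside the node?
	out/minikube-linux-amd64 -p functional-699837 ssh "sudo ss -ltn 'sport = :2379'"

	# does the apiserver answer on its advertised address?
	out/minikube-linux-amd64 -p functional-699837 ssh "curl -sk https://192.168.49.2:8441/healthz"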
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (323.541963ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/StatusCmd (2.88s)
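For reference, the --format argument exercised above is a Go template rendered against minikube's status struct, so the fields this report queries one at a time ({{.Host}}, {{.APIServer}}) can be combined into a single call; a sketch using only field names that appear in this run:

	out/minikube-linux-amd64 status -p functional-699837 --format='{{.Host}} {{.APIServer}}'
	# healthy profile: Running Running
	# this run:        Running Stopped  (node container up, apiserver down)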

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect (1.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-699837 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1646: (dbg) Non-zero exit: kubectl --context functional-699837 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (51.173251ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1650: failed to create hello-node deployment with this command "kubectl --context functional-699837 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1615: service test failed - dumping debug information
functional_test.go:1616: -----------------------service failure post-mortem--------------------------------
functional_test.go:1619: (dbg) Run:  kubectl --context functional-699837 describe po hello-node-connect
functional_test.go:1619: (dbg) Non-zero exit: kubectl --context functional-699837 describe po hello-node-connect: exit status 1 (53.230159ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1621: "kubectl --context functional-699837 describe po hello-node-connect" failed: exit status 1
functional_test.go:1623: hello-node pod describe:
functional_test.go:1625: (dbg) Run:  kubectl --context functional-699837 logs -l app=hello-node-connect
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-699837 logs -l app=hello-node-connect: exit status 1 (52.994522ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1627: "kubectl --context functional-699837 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1629: hello-node logs:
functional_test.go:1631: (dbg) Run:  kubectl --context functional-699837 describe svc hello-node-connect
functional_test.go:1631: (dbg) Non-zero exit: kubectl --context functional-699837 describe svc hello-node-connect: exit status 1 (52.469855ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1633: "kubectl --context functional-699837 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1635: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
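The NetworkSettings.Ports block in the inspect output above is the authoritative host mapping for this node; for example, the apiserver port 8441/tcp is published on 127.0.0.1:32786. The same mapping can be read without parsing the full JSON (standard Docker CLI, shown as a sketch against this container):

	docker port functional-699837 8441
	# 127.0.0.1:32786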
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (308.653364ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                          ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ dashboard  │ --url --port 36195 -p functional-699837 --alsologtostderr -v=1                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ mount      │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount1 --alsologtostderr -v=1 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh        │ functional-699837 ssh findmnt -T /mount1                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount      │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount2 --alsologtostderr -v=1 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ mount      │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount3 --alsologtostderr -v=1 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/1582690.pem                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image ls                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh findmnt -T /mount2                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /usr/share/ca-certificates/1582690.pem                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh findmnt -T /mount3                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount      │ -p functional-699837 --kill=true                                                                                                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ docker-env │ functional-699837 docker-env                                                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/15826902.pem                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /usr/share/ca-certificates/15826902.pem                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /etc/test/nested/copy/1582690/hosts                                                                                     │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ addons     │ functional-699837 addons list                                                                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ addons     │ functional-699837 addons list -o json                                                                                                                  │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh echo hello                                                                                                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh cat /etc/hostname                                                                                                                │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image ls                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ tunnel     │ functional-699837 tunnel --alsologtostderr                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ tunnel     │ functional-699837 tunnel --alsologtostderr                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	└────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:14:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:14:12.992327 1684525 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:12.992632 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992647 1684525 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:12.992653 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992985 1684525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:12.993729 1684525 out.go:352] Setting JSON to false
	I0804 09:14:12.995013 1684525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150942,"bootTime":1754147911,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:12.995107 1684525 start.go:140] virtualization: kvm guest
	I0804 09:14:12.997234 1684525 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:12.998435 1684525 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:12.998495 1684525 notify.go:220] Checking for updates...
	I0804 09:14:13.000523 1684525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:13.001833 1684525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:13.003094 1684525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:13.004247 1684525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:13.005485 1684525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:13.006929 1684525 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:13.007672 1684525 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:13.037008 1684525 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:13.037170 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.108391 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:58 SystemTime:2025-08-04 09:14:13.099283492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.108492 1684525 docker.go:318] overlay module found
	I0804 09:14:13.109830 1684525 out.go:177] * Using the docker driver based on existing profile
	I0804 09:14:13.110806 1684525 start.go:304] selected driver: docker
	I0804 09:14:13.110821 1684525 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.110918 1684525 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:13.111010 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.174998 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:57 SystemTime:2025-08-04 09:14:13.163491877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.175928 1684525 cni.go:84] Creating CNI manager for ""
	I0804 09:14:13.176003 1684525 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:14:13.176058 1684525 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.178622 1684525 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       About a minute ago   Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:17.014173   27214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:17.014703   27214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:17.016249   27214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:17.016624   27214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:17.018208   27214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
	
	
	
	==> kernel <==
	 09:14:17 up 1 day, 17:55,  0 users,  load average: 0.42, 0.17, 0.25
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:14:07.280269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:14:08.128547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:14:10.109602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: I0804 09:14:04.635636   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: E0804 09:14:04.636090   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.350524   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.610218   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644074   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: I0804 09:14:08.644186   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644380   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643561   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: I0804 09:14:10.643671   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643844   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.218396   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: I0804 09:14:11.637647   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.638029   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.997440   23032 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.610748   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.694152   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: E0804 09:14:14.644071   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: I0804 09:14:14.644181   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: E0804 09:14:14.644371   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:15 functional-699837 kubelet[23032]: E0804 09:14:15.238542   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Aug 04 09:14:15 functional-699837 kubelet[23032]: E0804 09:14:15.351513   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:16 functional-699837 kubelet[23032]: E0804 09:14:16.383674   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-699837&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	

-- /stdout --
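The dump above carries one root cause: the etcd container exits immediately with "flag provided but not defined: -proxy-refresh-interval", and every other control-plane failure (the apiserver's refused dials to 127.0.0.1:2379, the controller-manager's /healthz timeout, the kubelet's registration errors) follows from etcd never coming up. As a quick local check (a sketch, not part of the test suite), the static-pod manifest can be copied off the node and scanned for the rejected flag; /etc/kubernetes/manifests/etcd.yaml is the standard kubeadm location and is an assumption here, as is the scratch filename etcd.yaml:

	// probe_etcd_flags.go - minimal sketch: scan a local copy of the etcd
	// static-pod manifest for the flag etcd rejected above. Fetch the copy
	// first, e.g.:
	//   minikube -p functional-699837 ssh -- sudo cat /etc/kubernetes/manifests/etcd.yaml > etcd.yaml
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("etcd.yaml") // hypothetical local copy of the manifest
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for n := 1; sc.Scan(); n++ {
			// Print any manifest line that still passes the removed flag.
			if strings.Contains(sc.Text(), "proxy-refresh-interval") {
				fmt.Printf("%d: %s\n", n, strings.TrimSpace(sc.Text()))
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}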
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (271.314246ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmdConnect (1.52s)
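Every helper failure in this test reduces to the same symptom: nothing is answering on 192.168.49.2:8441. A standalone probe reproduces it outside the test harness; this is a minimal sketch, with TLS verification skipped on the assumption that the serving certificate is signed by the cluster's own minikubeCA rather than a public CA:

	// healthz_probe.go - minimal sketch: hit the apiserver's /healthz once.
	// While kube-apiserver is in CrashLoopBackOff this prints the same
	// "connection refused" seen throughout the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Cluster-local CA; skip verification for a raw reachability probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %s (%s)\n", body, resp.Status)
	}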

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim (241.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
    (last warning repeated 11 more times)
I0804 09:14:26.660342 1582690 retry.go:31] will retry after 1.585736249s: Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
    (last warning repeated 10 more times)
I0804 09:14:38.246606 1582690 retry.go:31] will retry after 5.199786562s: Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
    (last warning repeated 8 more times)
I0804 09:14:53.448195 1582690 retry.go:31] will retry after 3.551388564s: Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": net/http: TLS handshake timeout
I0804 09:15:07.001365 1582690 retry.go:31] will retry after 7.696382159s: Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": net/http: TLS handshake timeout
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:46380->192.168.49.2:8441: read: connection reset by peer
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 16 more times]
I0804 09:15:24.698596 1582690 retry.go:31] will retry after 12.7578472s: Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 16 more times]
E0804 09:15:41.677845 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 5 more times]
I0804 09:15:47.457215 1582690 retry.go:31] will retry after 15.516560413s: Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
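Note on the retry.go:31 lines above: the probe of http://10.107.4.181 is backing off with growing, jittered delays (3.55s, 7.70s, 12.76s, 15.52s). As a rough illustration only, not minikube's actual retry.go, a Go sketch of that retry-with-backoff pattern could look like the following; pollService, its constants, and the overall deadline are hypothetical:

// Illustrative sketch only (assumed names, not minikube's retry code).
// It mimics the pattern in the log: retry an HTTP GET with growing,
// jittered delays until the service answers or the deadline expires.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func pollService(ctx context.Context, url string) error {
	backoff := 2 * time.Second
	client := &http.Client{Timeout: 10 * time.Second}
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // service answered; stop retrying
			}
			err = fmt.Errorf("unexpected status: %s", resp.Status)
		}
		// Grow the delay and add jitter, like the increasing
		// "will retry after ..." intervals in the log.
		backoff += time.Duration(rand.Int63n(int64(backoff)))
		select {
		case <-ctx.Done():
			return fmt.Errorf("giving up on %s: last error: %v", url, err)
		case <-time.After(backoff):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := pollService(ctx, "http://10.107.4.181"); err != nil {
		fmt.Println(err)
	}
}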
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 78 more times]
E0804 09:17:06.565232 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 46 more times]
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": net/http: TLS handshake timeout
helpers_test.go:329: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: read tcp 192.168.49.1:54476->192.168.49.2:8441: read: connection reset by peer
functional_test_pvc_test.go:44: ***** TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (266.519356ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
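For context: the wait that timed out here is a poll of the kube-system namespace for a pod matching the label selector shown in the warnings. A minimal client-go sketch of that kind of loop is below, useful for reproducing the check by hand. Only the namespace, label selector, and 4m0s deadline are taken from the output above; the kubeconfig path and the 2-second poll interval are assumptions, and this is not the actual helper from functional_test_pvc_test.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig; the path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// The 4m0s deadline matches the failure message above; the 2s poll
		// interval is an assumption.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// The connection-refused warnings above come from calls like this one.
				fmt.Println("WARNING: pod list returned:", err)
			} else {
				for _, p := range pods.Items {
					if p.Status.Phase == v1.PodRunning {
						fmt.Println("pod is running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed waiting for storage-provisioner:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}

Each connection-refused warning corresponds to one failed List call in a loop like this; the final context-deadline error is the poll exhausting its budget.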
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (267.607569ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
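The two probes above disagree because they check different layers: {{.Host}} reflects the docker container, which is Running, while {{.APIServer}} reflects the control plane inside it, which is Stopped. A quick manual cross-check is a plain TCP dial against the host port docker publishes for 8441/tcp (127.0.0.1:32786 in the inspect output above); a sketch, valid only while that container and port mapping exist:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:32786 is the published host port for the apiserver's
		// 8441/tcp, taken from the docker inspect output above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:32786", 2*time.Second)
		if err != nil {
			// "connection refused" here is consistent with APIServer=Stopped.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("something is listening on the apiserver port")
	}

A refused dial here matches the Stopped apiserver state even though the container itself stays up.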
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-699837 addons list                                                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ addons         │ functional-699837 addons list -o json                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh            │ functional-699837 ssh echo hello                                                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh            │ functional-699837 ssh cat /etc/hostname                                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ tunnel         │ functional-699837 tunnel --alsologtostderr                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ tunnel         │ functional-699837 tunnel --alsologtostderr                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image          │ functional-699837 image save kicbase/echo-server:functional-699837 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ tunnel         │ functional-699837 tunnel --alsologtostderr                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image          │ functional-699837 image rm kicbase/echo-server:functional-699837 --alsologtostderr                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image save --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ update-context │ functional-699837 update-context --alsologtostderr -v=2                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ update-context │ functional-699837 update-context --alsologtostderr -v=2                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ update-context │ functional-699837 update-context --alsologtostderr -v=2                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls --format short --alsologtostderr                                                                                                │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls --format yaml --alsologtostderr                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh            │ functional-699837 ssh pgrep buildkitd                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image          │ functional-699837 image ls --format table --alsologtostderr                                                                                                │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image build -t localhost/my-image:functional-699837 testdata/build --alsologtostderr                                                     │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls --format json --alsologtostderr                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image          │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:14:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:14:12.992327 1684525 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:12.992632 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992647 1684525 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:12.992653 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992985 1684525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:12.993729 1684525 out.go:352] Setting JSON to false
	I0804 09:14:12.995013 1684525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150942,"bootTime":1754147911,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:12.995107 1684525 start.go:140] virtualization: kvm guest
	I0804 09:14:12.997234 1684525 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:12.998435 1684525 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:12.998495 1684525 notify.go:220] Checking for updates...
	I0804 09:14:13.000523 1684525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:13.001833 1684525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:13.003094 1684525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:13.004247 1684525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:13.005485 1684525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:13.006929 1684525 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:13.007672 1684525 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:13.037008 1684525 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:13.037170 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.108391 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:58 SystemTime:2025-08-04 09:14:13.099283492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.108492 1684525 docker.go:318] overlay module found
	I0804 09:14:13.109830 1684525 out.go:177] * Using the docker driver based on existing profile
	I0804 09:14:13.110806 1684525 start.go:304] selected driver: docker
	I0804 09:14:13.110821 1684525 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.110918 1684525 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:13.111010 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.174998 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:57 SystemTime:2025-08-04 09:14:13.163491877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.175928 1684525 cni.go:84] Creating CNI manager for ""
	I0804 09:14:13.176003 1684525 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:14:13.176058 1684525 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.178622 1684525 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:14:19 functional-699837 dockerd[11071]: 2025/08/04 09:14:19 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Aug 04 09:15:07 functional-699837 dockerd[11071]: time="2025-08-04T09:15:07.281763099Z" level=info msg="ignoring event" container=3ad3dffaa8cfc9e2fcd05d100df465c67df12ca8c184354f6480acc72bbe8060 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:15:18 functional-699837 dockerd[11071]: time="2025-08-04T09:15:18.303957549Z" level=info msg="ignoring event" container=fd788fce0e531f0cc31311b0b039dcda8b0e941357807e24ed03d2fd5fbeb6ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:15:48 functional-699837 dockerd[11071]: time="2025-08-04T09:15:48.765000074Z" level=info msg="ignoring event" container=628a6f8858c995fb4ab77d4ff5329f85ba4682ad80315552aa0b35c6eebaaa92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:18:15 functional-699837 dockerd[11071]: time="2025-08-04T09:18:15.234086645Z" level=info msg="ignoring event" container=b7967a121033f384d5a33dcf4c5c1a50524e59612ebeb491aae5d422b2bb9f8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6589bf66b9a78       9ad783615e1bc       18 seconds ago      Running             kube-controller-manager   6                   b2655ec5482c6       kube-controller-manager-functional-699837
	b7967a121033f       d85eea91cc41d       22 seconds ago      Exited              kube-apiserver            6                   c3e3744dc769f       kube-apiserver-functional-699837
	628a6f8858c99       1e30c0b1e9b99       2 minutes ago       Exited              etcd                      6                   d4d4b2be5907a       etcd-functional-699837
	fd788fce0e531       9ad783615e1bc       3 minutes ago       Exited              kube-controller-manager   5                   b2655ec5482c6       kube-controller-manager-functional-699837
	ab71ff54628ca       21d34a2aeacf5       8 minutes ago       Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:18:16.270682   28668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:18:16.271239   28668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:18:16.272829   28668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:18:16.273300   28668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:18:16.274864   28668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [628a6f8858c9] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
	
	
	
	==> kernel <==
	 09:18:16 up 1 day, 17:59,  0 users,  load average: 0.06, 0.12, 0.21
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3ad3dffaa8cf] <==
	command /bin/bash -c "docker logs --tail 25 3ad3dffaa8cf" failed with error: /bin/bash -c "docker logs --tail 25 3ad3dffaa8cf": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 3ad3dffaa8cf
	
	
	==> kube-apiserver [b7967a121033] <==
	W0804 09:17:55.203744       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:55.203749       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:17:55.204956       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:17:55.212973       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:17:55.217692       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:17:55.217711       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:17:55.217937       1 instance.go:232] Using reconciler: lease
	W0804 09:17:55.218713       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:17:55.218852       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:56.205267       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:56.205279       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:56.219900       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:57.688018       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:57.689386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:17:57.946817       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:00.324598       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:00.424537       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:00.741845       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:04.925100       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:05.056997       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:05.650990       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:11.241173       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:11.254541       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:18:11.685808       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:18:15.218753       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
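
The fatal exit above is downstream of etcd: every dial to 127.0.0.1:2379 in the preceding lines is refused, so the apiserver's storage factory hits its deadline (09:17:55 to 09:18:15) and the process aborts. A minimal triage sketch, assuming the docker container runtime used in this run and cri-dockerd's usual k8s_ container-name prefix (etcdctl is not assumed to be present in the node image):

	# list the etcd container attempts on the node
	out/minikube-linux-amd64 -p functional-699837 ssh -- sudo docker ps -a --filter name=k8s_etcd
	# substitute an ID from the list above to see why the last attempt exited
	out/minikube-linux-amd64 -p functional-699837 ssh -- sudo docker logs --tail 20 <container-id>

The kubelet section further down confirms the same picture: etcd is in CrashLoopBackOff with a 5m back-off.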
	
	
	==> kube-controller-manager [6589bf66b9a7] <==
	I0804 09:17:59.756181       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:18:00.082535       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:18:00.082559       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:18:00.083843       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:18:00.083923       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:18:00.084254       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:18:00.084354       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [fd788fce0e53] <==
	I0804 09:14:59.172955       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:14:59.995989       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:14:59.996014       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:14:59.997325       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:14:59.997329       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:14:59.997584       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:14:59.997678       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:15:18.272179       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
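
The controller-manager exits after its apiserver health wait times out. In kubeadm-style clusters the /healthz, /livez and /readyz endpoints are readable without credentials (via the system:public-info-viewer ClusterRole), so the same probe can be replayed by hand; a sketch, assuming the host can reach the container IP directly, which holds for the docker driver on Linux:

	curl -sk https://192.168.49.2:8441/healthz; echo
	# a healthy apiserver answers "ok"; in this run the TCP connect itself is refused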
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:17:12.017275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:17:12.381034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:17:13.614977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:17:16.943542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:17:20.288191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:17:22.420162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:17:22.635437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:17:25.313601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:17:36.038343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:17:36.765149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:17:43.098337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:17:45.308834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:17:46.745323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:17:48.715506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:17:53.215054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:17:53.229906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:17:54.017993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:17:54.425075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:17:54.741767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:18:08.608165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:18:09.449638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:18:11.282604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:18:12.863465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:18:16.224577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56832->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:18:16.224600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56820->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
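
Each "Failed to watch" entry above is a failed list call from an informer; replaying one of them distinguishes a refused connect from the TLS handshake timeouts that appear mid-log. A sketch, reusing the kubeconfig context this test suite sets up:

	kubectl --context functional-699837 get --raw '/api/v1/nodes?limit=500'
	# fails with the same dial/TLS error until the apiserver is actually serving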
	
	
	==> kubelet <==
	Aug 04 09:17:58 functional-699837 kubelet[23032]: E0804 09:17:58.975282   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:18:00 functional-699837 kubelet[23032]: E0804 09:18:00.031684   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:18:02 functional-699837 kubelet[23032]: E0804 09:18:02.711952   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:18:05 functional-699837 kubelet[23032]: E0804 09:18:05.644187   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:18:05 functional-699837 kubelet[23032]: I0804 09:18:05.644261   23032 scope.go:117] "RemoveContainer" containerID="628a6f8858c995fb4ab77d4ff5329f85ba4682ad80315552aa0b35c6eebaaa92"
	Aug 04 09:18:05 functional-699837 kubelet[23032]: E0804 09:18:05.644396   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:18:05 functional-699837 kubelet[23032]: E0804 09:18:05.998916   23032 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:18:06 functional-699837 kubelet[23032]: E0804 09:18:06.295783   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Aug 04 09:18:06 functional-699837 kubelet[23032]: E0804 09:18:06.535132   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:18:08 functional-699837 kubelet[23032]: E0804 09:18:08.689844   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": net/http: TLS handshake timeout" node="functional-699837"
	Aug 04 09:18:12 functional-699837 kubelet[23032]: E0804 09:18:12.712826   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:18:14 functional-699837 kubelet[23032]: E0804 09:18:14.187553   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{functional-699837.18588548d24492b0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-699837 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.672657072 +0000 UTC m=+0.898605707,LastTimestamp:2025-08-04 09:10:02.672657072 +0000 UTC m=+0.898605707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:18:14 functional-699837 kubelet[23032]: E0804 09:18:14.187671   23032 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{functional-699837.18588548d24492b0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-699837 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.672657072 +0000 UTC m=+0.898605707,LastTimestamp:2025-08-04 09:10:02.672657072 +0000 UTC m=+0.898605707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:18:15 functional-699837 kubelet[23032]: E0804 09:18:15.223339   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": read tcp 192.168.49.2:39070->192.168.49.2:8441: read: connection reset by peer" event="&Event{ObjectMeta:{functional-699837.18588548d244a4c8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-699837 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.672661704 +0000 UTC m=+0.898610340,LastTimestamp:2025-08-04 09:10:02.672661704 +0000 UTC m=+0.898610340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:18:15 functional-699837 kubelet[23032]: I0804 09:18:15.690876   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:18:15 functional-699837 kubelet[23032]: E0804 09:18:15.691238   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: I0804 09:18:16.057454   23032 scope.go:117] "RemoveContainer" containerID="3ad3dffaa8cfc9e2fcd05d100df465c67df12ca8c184354f6480acc72bbe8060"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.058517   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: I0804 09:18:16.058617   23032 scope.go:117] "RemoveContainer" containerID="b7967a121033f384d5a33dcf4c5c1a50524e59612ebeb491aae5d422b2bb9f8f"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.058795   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.203556   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548d244a4c8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-699837 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.672661704 +0000 UTC m=+0.898610340,LastTimestamp:2025-08-04 09:10:02.672661704 +0000 UTC m=+0.898610340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.223988   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39058->192.168.49.2:8441: read: connection reset by peer" interval="7s"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.224061   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56858->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.224107   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39068->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Aug 04 09:18:16 functional-699837 kubelet[23032]: E0804 09:18:16.224027   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-699837&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39076->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
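
The kubelet keeps cycling between node-registration attempts and CrashLoopBackOff back-offs for etcd and kube-apiserver. Since the kicbase image runs the kubelet as a systemd unit, the live retry loop can be followed from the node; a sketch (unit name kubelet assumed):

	out/minikube-linux-amd64 -p functional-699837 ssh -- sudo journalctl -u kubelet -n 20 --no-pager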
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (263.191642ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
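minikube status --format takes a Go template over the status structure, so the fields the harness polls one at a time ({{.Host}}, {{.APIServer}}) can also be read together; a sketch, with {{.Kubelet}} assumed alongside the two fields this report already uses:

	out/minikube-linux-amd64 status -p functional-699837 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# for this run it would print something like: Running Running Stopped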
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PersistentVolumeClaim (241.49s)

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL (1.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-699837 replace --force -f testdata/mysql.yaml
functional_test.go:1810: (dbg) Non-zero exit: kubectl --context functional-699837 replace --force -f testdata/mysql.yaml: exit status 1 (52.210679ms)

** stderr ** 
	error when deleting "testdata/mysql.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/services/mysql": dial tcp 192.168.49.2:8441: connect: connection refused
	error when deleting "testdata/mysql.yaml": Delete "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1812: failed to kubectl replace mysql: args "kubectl --context functional-699837 replace --force -f testdata/mysql.yaml" failed: exit status 1
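Both stderr errors come from the delete half of replace --force; the create half never runs because the apiserver at 192.168.49.2:8441 is refusing connections. Outside the harness, a readiness guard avoids burning a retry budget on a dead endpoint; a sketch, not part of the test itself:

	until kubectl --context functional-699837 get --raw /readyz >/dev/null 2>&1; do sleep 5; done
	kubectl --context functional-699837 replace --force -f testdata/mysql.yaml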
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
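docker inspect accepts Go templates, so the post-mortem-relevant fields can be pulled out of the dump above directly; a sketch against the container this profile created (index is needed because the network name contains a hyphen):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-699837").IPAddress}}' functional-699837
	docker port functional-699837 8441/tcp   # prints the 127.0.0.1:32786 mapping shown above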
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (280.056696ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image      │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh findmnt -T /mount2                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /usr/share/ca-certificates/1582690.pem                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh findmnt -T /mount3                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount      │ -p functional-699837 --kill=true                                                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ docker-env │ functional-699837 docker-env                                                                                                                               │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/15826902.pem                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /usr/share/ca-certificates/15826902.pem                                                                                     │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh sudo cat /etc/test/nested/copy/1582690/hosts                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh        │ functional-699837 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ addons     │ functional-699837 addons list                                                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ addons     │ functional-699837 addons list -o json                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh echo hello                                                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh        │ functional-699837 ssh cat /etc/hostname                                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ tunnel     │ functional-699837 tunnel --alsologtostderr                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ tunnel     │ functional-699837 tunnel --alsologtostderr                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image      │ functional-699837 image save kicbase/echo-server:functional-699837 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ tunnel     │ functional-699837 tunnel --alsologtostderr                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image      │ functional-699837 image rm kicbase/echo-server:functional-699837 --alsologtostderr                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ image      │ functional-699837 image ls                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	└────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
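
The audit table above is rendered from the profile's persisted audit log. Under the MINIKUBE_HOME shown in the start log below it would live at .minikube/logs/audit.json; that path is an assumption, not something this report states:

	# audit log location assumed, one JSON record per command
	tail -n 3 /home/jenkins/minikube-integration/21223-1578987/.minikube/logs/audit.json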
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:14:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:14:12.992327 1684525 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:12.992632 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992647 1684525 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:12.992653 1684525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.992985 1684525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:12.993729 1684525 out.go:352] Setting JSON to false
	I0804 09:14:12.995013 1684525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150942,"bootTime":1754147911,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:12.995107 1684525 start.go:140] virtualization: kvm guest
	I0804 09:14:12.997234 1684525 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:12.998435 1684525 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:12.998495 1684525 notify.go:220] Checking for updates...
	I0804 09:14:13.000523 1684525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:13.001833 1684525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:13.003094 1684525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:13.004247 1684525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:13.005485 1684525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:13.006929 1684525 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:13.007672 1684525 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:13.037008 1684525 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:13.037170 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.108391 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:58 SystemTime:2025-08-04 09:14:13.099283492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.108492 1684525 docker.go:318] overlay module found
	I0804 09:14:13.109830 1684525 out.go:177] * Using the docker driver based on existing profile
	I0804 09:14:13.110806 1684525 start.go:304] selected driver: docker
	I0804 09:14:13.110821 1684525 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.110918 1684525 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:13.111010 1684525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:13.174998 1684525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:57 SystemTime:2025-08-04 09:14:13.163491877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:13.175928 1684525 cni.go:84] Creating CNI manager for ""
	I0804 09:14:13.176003 1684525 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:14:13.176058 1684525 start.go:348] cluster config:
	{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:13.178622 1684525 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       About a minute ago   Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:18.309823   27500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:18.310454   27500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:18.311603   27500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:18.312099   27500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:18.313690   27500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
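The usage dump above is etcd refusing to start: "flag provided but not defined: -proxy-refresh-interval" means the bundled etcd no longer recognizes a flag its static-pod manifest still passes, so the container exits immediately on every restart (consistent with etcd showing "Exited" with attempt 5 in the container status above), and the apiserver's dials to 127.0.0.1:2379 in the section below are refused. A minimal diagnostic sketch, not part of the test suite; the /etc/kubernetes/manifests path is the usual kubeadm static-pod location inside the kicbase container and is an assumption about the image layout, not something this log confirms:

package main

import (
	"fmt"
	"os/exec"
)

// Search the static-pod manifests inside the node container for the flag
// that etcd rejected above. The manifest path is an assumption (standard
// kubeadm layout), as is the presence of grep in the node image.
func main() {
	out, err := exec.Command("docker", "exec", "functional-699837",
		"grep", "-rn", "proxy-refresh-interval", "/etc/kubernetes/manifests/").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}

A hit in etcd.yaml would tie the crash loop, and the connection-refused cascade in the kube-apiserver, kube-controller-manager, and kubelet sections below, to a single stale flag.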
	
	
	
	==> kernel <==
	 09:14:18 up 1 day, 17:55,  0 users,  load average: 0.54, 0.20, 0.26
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:14:07.280269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:14:08.128547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:14:10.109602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: I0804 09:14:04.635636   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: E0804 09:14:04.636090   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.350524   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.610218   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644074   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: I0804 09:14:08.644186   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644380   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643561   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: I0804 09:14:10.643671   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643844   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.218396   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: I0804 09:14:11.637647   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.638029   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.997440   23032 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.610748   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.694152   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: E0804 09:14:14.644071   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: I0804 09:14:14.644181   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:14 functional-699837 kubelet[23032]: E0804 09:14:14.644371   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:15 functional-699837 kubelet[23032]: E0804 09:14:15.238542   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Aug 04 09:14:15 functional-699837 kubelet[23032]: E0804 09:14:15.351513   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:16 functional-699837 kubelet[23032]: E0804 09:14:16.383674   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-699837&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (268.518395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
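helpers_test stops short of running kubectl here because the status probe reports the apiserver as Stopped. A standalone sketch of the same check, assuming the localhost:8441 endpoint seen in the kubectl errors above; InsecureSkipVerify is used only because the apiserver serves a self-signed certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the endpoint the failing kubectl calls were using.
	// A "connection refused" error here matches the Stopped status above.
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Diagnostic probe only: skip verification of the
			// apiserver's self-signed serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}

Against this cluster state the probe should print a connection-refused error, matching the exit status 2 from the status command above.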
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MySQL (1.31s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-699837 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:236: (dbg) Non-zero exit: kubectl --context functional-699837 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (59.075157ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:238: failed to 'kubectl get nodes' with args "kubectl --context functional-699837 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:244: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:244: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:244: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:244: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:244: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
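The five label assertions above all fail for the same reason: with the apiserver down, kubectl received an empty List, and the index builtin in the go-template cannot take element 0 of an empty items slice. A minimal standalone reproduction of the template failure against the raw data shown above (kubectl's own build surfaces the error as the "reflect: slice index out of range" message logged above):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"text/template"
)

func main() {
	// The raw data kubectl fed to the template: an empty node List,
	// exactly as printed in the debug output above.
	raw := `{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}`
	var obj map[string]any
	if err := json.Unmarshal([]byte(raw), &obj); err != nil {
		panic(err)
	}
	tmpl := template.Must(template.New("output").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	// With zero items, `index .items 0` cannot succeed, so Execute
	// returns an index-out-of-range error instead of printing labels.
	if err := tmpl.Execute(os.Stdout, obj); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

So the missing minikube.k8s.io/* labels are a symptom of the dead apiserver on 8441, not of a mislabeled node.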
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-699837
helpers_test.go:235: (dbg) docker inspect functional-699837:

-- stdout --
	[
	    {
	        "Id": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	        "Created": "2025-08-04T08:46:45.45274172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T08:46:45.480784715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/hosts",
	        "LogPath": "/var/lib/docker/containers/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef/c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef-json.log",
	        "Name": "/functional-699837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-699837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-699837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c369b96e23d5b41fbe502377870d491580cb85c5215f8441347e14f0e4bc37ef",
	                "LowerDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/merged",
	                "UpperDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/diff",
	                "WorkDir": "/var/lib/docker/overlay2/328952bd765245f57c2eaa05b0bd7cdbe686ae38a32f149eefbc775cdfc03252/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-699837",
	                "Source": "/var/lib/docker/volumes/functional-699837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-699837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-699837",
	                "name.minikube.sigs.k8s.io": "functional-699837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28a81d3856c88da8c1d30d5c1cccd74ba2a899c3397b78caf0ac9da484142038",
	            "SandboxKey": "/var/run/docker/netns/28a81d3856c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-699837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c5:9a:18:f2:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "763070d9e7bba0803db69bf71eb608d56921d0bfd4c71a1d39d0701f7372b87c",
	                    "EndpointID": "83493e8c17b59326d8c479c2c0d7a5ded2cae3362a881c1ce8347b3f751ead15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-699837",
	                        "c369b96e23d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
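The inspect dump confirms the container itself is healthy: State.Status is "running" and the node holds 192.168.49.2 on the functional-699837 network, so the failure is inside the guest rather than at the Docker layer. When only those two fields matter, a short Go sketch (a hypothetical convenience, not test code) can extract them instead of scanning the whole document:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields this post-mortem actually consults.
	type inspect struct {
		State           struct{ Status string }
		NetworkSettings struct {
			Networks map[string]struct{ IPAddress string }
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-699837").Output()
		if err != nil {
			panic(err)
		}
		var cs []inspect // docker inspect prints a JSON array
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// Prints "running 192.168.49.2" for the dump above.
		fmt.Println(cs[0].State.Status, cs[0].NetworkSettings.Networks["functional-699837"].IPAddress)
	}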
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-699837 -n functional-699837: exit status 2 (332.474622ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs -n 25
helpers_test.go:252: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config  │ functional-699837 config get cpus                                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config unset cpus                                                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh -n functional-699837 sudo cat /home/docker/cp-test.txt                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ config  │ functional-699837 config get cpus                                                                                                                                      │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ service │ functional-699837 service list -o json                                                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ cp      │ functional-699837 cp functional-699837:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelCpCmd4180608053/001/cp-test.txt        │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount   │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001:/mount-9p --alsologtostderr -v=1                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ service │ functional-699837 service --namespace=default --https --url hello-node                                                                                                 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh -n functional-699837 sudo cat /home/docker/cp-test.txt                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ service │ functional-699837 service hello-node --url --format={{.IP}}                                                                                                            │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ cp      │ functional-699837 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ service │ functional-699837 service hello-node --url                                                                                                                             │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh -n functional-699837 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh sudo systemctl is-active crio                                                                                                                    │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh -- ls -la /mount-9p                                                                                                                              │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh cat /mount-9p/test-1754298848740038253                                                                                                           │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ ssh     │ functional-699837 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                       │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh sudo umount -f /mount-9p                                                                                                                         │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │ 04 Aug 25 09:14 UTC │
	│ mount   │ -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdspecific-port2621928662/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ start   │ -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0                        │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ image   │ functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr                                                                          │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	│ ssh     │ functional-699837 ssh findmnt -T /mount-9p | grep 9p                                                                                                                   │ functional-699837 │ jenkins │ v1.36.0 │ 04 Aug 25 09:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:14:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:14:11.715923 1683521 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:11.716018 1683521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:11.716023 1683521 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:11.716027 1683521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:11.716326 1683521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:11.716892 1683521 out.go:352] Setting JSON to false
	I0804 09:14:11.717934 1683521 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150941,"bootTime":1754147911,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:11.718042 1683521 start.go:140] virtualization: kvm guest
	I0804 09:14:11.719821 1683521 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:11.720835 1683521 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:11.720869 1683521 notify.go:220] Checking for updates...
	I0804 09:14:11.722767 1683521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:11.723980 1683521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:11.724962 1683521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:11.725977 1683521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:11.726884 1683521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:11.728212 1683521 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:11.728808 1683521 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:11.754162 1683521 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:11.754269 1683521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:11.807011 1683521 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:58 SystemTime:2025-08-04 09:14:11.797857069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:11.807161 1683521 docker.go:318] overlay module found
	I0804 09:14:11.808723 1683521 out.go:177] * Using the docker driver based on the existing profile
	I0804 09:14:11.809672 1683521 start.go:304] selected driver: docker
	I0804 09:14:11.809692 1683521 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:11.809804 1683521 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:11.812046 1683521 out.go:201] 
	W0804 09:14:11.813123 1683521 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0804 09:14:11.814088 1683521 out.go:201] 
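This dry-run start exits by design: minikube's preflight validation rejects the requested 250MB before any driver work happens. A hypothetical sketch of that check (the 1800MB floor and the error code are taken from the message above, not from minikube's source):

	package main

	import "fmt"

	const minUsableMB = 1800 // floor quoted in the warning above

	// validateMemory is a hypothetical stand-in for minikube's preflight memory check.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		// The audit table shows `start --dry-run --memory 250MB`, so:
		fmt.Println(validateMemory(250))
	}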
	
	
	==> Docker <==
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.787995733Z" level=info msg="ignoring event" container=f1bd416cdc841c08268e4a5cc39ad5a59cc0a90b637768c23bba55fc61dfe5c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.834457529Z" level=info msg="ignoring event" container=e5c110c6a30cdc8999b8b044af4d1ddbb8d18f91cb064a1ebe54d22157751829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.885743027Z" level=info msg="ignoring event" container=e13433a1e498749e89b61d95e4e808ac592ff0f1590fa6a6796cb547fa62b353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.942900152Z" level=info msg="ignoring event" container=0dbe96ba02a76e8c83b519e0f5e45430250b1274660db94c7535b17780b8b6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:09:59 functional-699837 dockerd[11071]: time="2025-08-04T09:09:59.996443176Z" level=info msg="ignoring event" container=65a02a714ffa74a76d877f2f692a10085ec7c8de0a017440b9efab00ad27e971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4d4b2be5907ada8d86373ea4112563c2759616d61b4a3818a35c5e172d53a14/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3e3744dc769f21f2dd24654e1beecb6bfea7f8fdbb934aece5c0de776222793/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2655ec5482c692bf93620fb4f296ae1f6e6322e8ac4d9bc5b6eb4deb7959758/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 cri-dockerd[11426]: time="2025-08-04T09:10:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a21deea3bd6d0ed2e1f870c1f36ae32ec63d20d02b5d6f7c0acfdbaa8f8b941/resolv.conf as [nameserver 192.168.49.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:10:03 functional-699837 dockerd[11071]: time="2025-08-04T09:10:03.575810667Z" level=info msg="ignoring event" container=b425fd9606261cc933d38c743338a7166df00b74150ec90a06efaa88ed8fc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:04 functional-699837 dockerd[11071]: time="2025-08-04T09:10:04.004048987Z" level=info msg="ignoring event" container=6405868ef96be39062f80dc7747b60785a54bddc511237239054e6857dfb60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.604145123Z" level=info msg="ignoring event" container=fa805a11775898f3d55fe7aac1621ef34f65e4c5d265b91d14f1aac398eb73e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:25 functional-699837 dockerd[11071]: time="2025-08-04T09:10:25.760949608Z" level=info msg="ignoring event" container=f96509d0b4a5c44670e00704a788094c91d7b771e339e28bcbb4c72c5b3337f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:46 functional-699837 dockerd[11071]: time="2025-08-04T09:10:46.592786531Z" level=info msg="ignoring event" container=f4baa19e4e176c92972f5c522b74a59ccb787659ec18793a2507e5f3eb51c18e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:47 functional-699837 dockerd[11071]: time="2025-08-04T09:10:47.616507681Z" level=info msg="ignoring event" container=25c1c03e2a156d302903662e106257ad86e1a932fc60405f41533a9012305264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:55 functional-699837 dockerd[11071]: time="2025-08-04T09:10:55.761109664Z" level=info msg="ignoring event" container=c26a4a47aeb6e114017bda7b18b81d29e691be9cb646b2d0563767522b4243e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:10:59 functional-699837 dockerd[11071]: time="2025-08-04T09:10:59.048340949Z" level=info msg="ignoring event" container=5782c2a66cdd131809b7afdb2a669ecdc6104e397476ab6668c189dd853d9135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:23 functional-699837 dockerd[11071]: time="2025-08-04T09:11:23.680443620Z" level=info msg="ignoring event" container=8b79556a690891c36a658f03ea970153fdb49c95eddd24f9241c3648decbc9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:34 functional-699837 dockerd[11071]: time="2025-08-04T09:11:34.704315507Z" level=info msg="ignoring event" container=b2c8622eb896520d559e06ff8656f4690c8183e99d4c298a76889fb2e1f0ebf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:11:41 functional-699837 dockerd[11071]: time="2025-08-04T09:11:41.762186466Z" level=info msg="ignoring event" container=bc29e58366f3b736cc21b6d0cc45970040b105936cf9045300d75e3e3fc5a723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:15 functional-699837 dockerd[11071]: time="2025-08-04T09:12:15.453114207Z" level=info msg="ignoring event" container=9fa5f5eeba93beb44bb9b23ec48553aaea94d0f30b5d2c53f2f15b77b1d7977c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:12:26 functional-699837 dockerd[11071]: time="2025-08-04T09:12:26.472269528Z" level=info msg="ignoring event" container=91a0d13be39f38898491d381b24367c6e8aed57bbdcaf093ac956972d4c853ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:07 functional-699837 dockerd[11071]: time="2025-08-04T09:13:07.763715484Z" level=info msg="ignoring event" container=0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:21 functional-699837 dockerd[11071]: time="2025-08-04T09:13:21.094277794Z" level=info msg="ignoring event" container=c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:13:29 functional-699837 dockerd[11071]: time="2025-08-04T09:13:29.764267638Z" level=info msg="ignoring event" container=19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	19b815a4b1b28       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   4                   b2655ec5482c6       kube-controller-manager-functional-699837
	0e5a036fd8651       1e30c0b1e9b99       About a minute ago   Exited              etcd                      5                   d4d4b2be5907a       etcd-functional-699837
	c9537e09fe59d       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   c3e3744dc769f       kube-apiserver-functional-699837
	ab71ff54628ca       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   3a21deea3bd6d       kube-scheduler-functional-699837
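Three of the four control-plane containers have exited, and the ATTEMPT column lines up with kubelet's crash-loop back-off (assuming the documented defaults: 10s base, doubling per restart, capped at 5m): the fifth etcd restart corresponds to the 2m40s back-off reported in the kubelet log below. A small sketch of that schedule:

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelay models kubelet's default CrashLoopBackOff schedule:
	// 10s, 20s, 40s, ... capped at 5m.
	func crashLoopDelay(restarts int) time.Duration {
		d := 10 * time.Second << (restarts - 1)
		if d > 5*time.Minute {
			d = 5 * time.Minute
		}
		return d
	}

	func main() {
		for r := 1; r <= 6; r++ {
			fmt.Printf("restart %d: back-off %s\n", r, crashLoopDelay(r))
		}
		// restart 5 prints 2m40s, matching the etcd pod's back-off below.
	}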
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 09:14:12.904777   26414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:12.905566   26414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:12.907151   26414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:12.907572   26414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E0804 09:14:12.908768   26414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
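Every kubectl failure in this report reduces to the same condition: nothing is accepting connections on the apiserver port. That can be confirmed without kubectl at all; a minimal probe, assuming only the standard library:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 8441 is the --apiserver-port this profile was started with.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // "connection refused" while kube-apiserver is down
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}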
	
	
	==> dmesg <==
	[  +0.000488] IPv4: martian source 10.244.0.33 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[  +0.000590] IPv4: martian source 10.244.0.33 from 10.244.0.7, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 17 d6 72 58 d4 08 06
	[ +20.425373] IPv4: martian source 10.244.0.36 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 2e 04 ae c5 a3 08 06
	[  +0.708699] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 55 96 56 a6 b6 08 06
	[Aug 4 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 4d a6 d6 4c 9f 08 06
	[Aug 4 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 38 7f 58 31 63 08 06
	[ +30.193533] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 b7 61 9c 47 84 08 06
	[Aug 4 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a d0 26 e8 7c d1 08 06
	[Aug 4 08:46] FS-Cache: Duplicate cookie detected
	[  +0.004807] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006832] FS-Cache: O-cookie d=000000003739c6e4{9P.session} n=000000001b482ea5
	[  +0.007607] FS-Cache: O-key=[10] '34333332323039333239'
	[  +0.005436] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006682] FS-Cache: N-cookie d=000000003739c6e4{9P.session} n=00000000e0b3994b
	[  +0.007609] FS-Cache: N-key=[10] '34333332323039333239'
	[  +5.882110] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 55 4a ac 47 cd 08 06
	
	
	==> etcd [0e5a036fd865] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
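This usage dump is the whole story of the etcd crash loop: the binary is being started with -proxy-refresh-interval, a flag it does not define, so it prints usage and exits immediately. The message is standard Go flag-package behavior, reproducible in isolation (this sketch assumes nothing about etcd's internals beyond stdlib flag semantics):

	package main

	import (
		"flag"
		"fmt"
	)

	func main() {
		// A flag set that, like this etcd binary, does not define the flag.
		fs := flag.NewFlagSet("etcd", flag.ContinueOnError)
		err := fs.Parse([]string{"--proxy-refresh-interval=70000"})
		fmt.Println(err) // flag provided but not defined: -proxy-refresh-interval
	}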
	
	
	
	==> kernel <==
	 09:14:12 up 1 day, 17:55,  0 users,  load average: 0.42, 0.17, 0.25
	Linux functional-699837 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9537e09fe59] <==
	W0804 09:13:01.062968       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.063080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:13:01.064364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:13:01.072243       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:13:01.077057       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceAutoProvision,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:13:01.077076       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:13:01.077355       1 instance.go:232] Using reconciler: lease
	W0804 09:13:01.078152       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:13:01.078183       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064385       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.064386       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:02.079065       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.556302       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.764969       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:03.836811       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:05.764628       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.271423       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:06.558313       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:09.120366       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:10.991226       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:11.100603       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:15.082522       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:16.616538       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:13:18.138507       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:13:21.078676       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
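With etcd down, the apiserver's storage client can never complete a connection to 127.0.0.1:2379; the W-lines are gRPC's reconnect attempts and the final F-line is the give-up once the storage-factory deadline expires. The control flow is roughly the following (a sketch under those assumptions, with a plain dial standing in for gRPC's backoff):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
		defer cancel()
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "tcp", "127.0.0.1:2379")
			if err == nil {
				conn.Close()
				fmt.Println("etcd reachable")
				return
			}
			if ctx.Err() != nil {
				// Mirrors the fatal instance.go line above.
				fmt.Println("Error creating leases: error creating storage factory: context deadline exceeded")
				return
			}
			time.Sleep(time.Second) // gRPC uses exponential backoff; fixed interval here for brevity
		}
	}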
	
	
	==> kube-controller-manager [19b815a4b1b2] <==
	I0804 09:13:09.096379       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:13:09.725784       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:13:09.725823       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:13:09.727763       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:13:09.727831       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:13:09.728078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:13:09.728188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:13:29.730720       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [ab71ff54628c] <==
	E0804 09:13:15.121795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:17.161677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:18.600381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43972->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43978->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:22.083979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43960->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:22.083981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44032->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:13:22.084172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43986->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:13:22.585066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:13:26.210416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:13:27.295821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:13:34.688522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:13:37.031049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:13:45.713447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:13:49.362723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:13:54.296326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:13:55.421665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:13:56.863265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:13:57.488174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:13:59.236047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:14:03.694972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:14:07.280269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:14:08.128547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:14:10.109602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.643831   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: I0804 09:13:57.643903   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:13:57 functional-699837 kubelet[23032]: E0804 09:13:57.644026   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:13:58 functional-699837 kubelet[23032]: E0804 09:13:58.609095   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:13:59 functional-699837 kubelet[23032]: E0804 09:13:59.432444   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 09:14:02 functional-699837 kubelet[23032]: E0804 09:14:02.693365   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644142   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: I0804 09:14:03.644222   23032 scope.go:117] "RemoveContainer" containerID="19b815a4b1b280da4b7de16491fdc883687c7e484c58d31bf9c06b0d634911ec"
	Aug 04 09:14:03 functional-699837 kubelet[23032]: E0804 09:14:03.644365   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-699837_kube-system(ed0b2fd0bf6ad62500e8494ab79d1a1a)\"" pod="kube-system/kube-controller-manager-functional-699837" podUID="ed0b2fd0bf6ad62500e8494ab79d1a1a"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: I0804 09:14:04.635636   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:04 functional-699837 kubelet[23032]: E0804 09:14:04.636090   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.350524   23032 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-699837.18588548cf9cd04c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-699837,UID:functional-699837,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:functional-699837,},FirstTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,LastTimestamp:2025-08-04 09:10:02.628108364 +0000 UTC m=+0.854057015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-699837,}"
	Aug 04 09:14:05 functional-699837 kubelet[23032]: E0804 09:14:05.610218   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644074   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: I0804 09:14:08.644186   23032 scope.go:117] "RemoveContainer" containerID="0e5a036fd86510c6905627abc7a8f9644900d6148dfaaa8cbd159578295980b1"
	Aug 04 09:14:08 functional-699837 kubelet[23032]: E0804 09:14:08.644380   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-699837_kube-system(33b890b5c0b95f8eaa124c566a17ff33)\"" pod="kube-system/etcd-functional-699837" podUID="33b890b5c0b95f8eaa124c566a17ff33"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643561   23032 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-699837\" not found" node="functional-699837"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: I0804 09:14:10.643671   23032 scope.go:117] "RemoveContainer" containerID="c9537e09fe59d7af5ff13dca38fcb65bc3a4d86096a58ca4f4f261c7f9fb4f6e"
	Aug 04 09:14:10 functional-699837 kubelet[23032]: E0804 09:14:10.643844   23032 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-699837_kube-system(cc94200f18453b93e8d420d475923a00)\"" pod="kube-system/kube-apiserver-functional-699837" podUID="cc94200f18453b93e8d420d475923a00"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.218396   23032 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: I0804 09:14:11.637647   23032 kubelet_node_status.go:75] "Attempting to register node" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.638029   23032 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-699837"
	Aug 04 09:14:11 functional-699837 kubelet[23032]: E0804 09:14:11.997440   23032 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.610748   23032 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-699837?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Aug 04 09:14:12 functional-699837 kubelet[23032]: E0804 09:14:12.694152   23032 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-699837\" not found"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-699837 -n functional-699837: exit status 2 (370.685045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-699837" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NodeLabels (1.67s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/DeployApp (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-699837 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1456: (dbg) Non-zero exit: kubectl --context functional-699837 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (67.427014ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1460: failed to create hello-node deployment with this command "kubectl --context functional-699837 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/DeployApp (0.07s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 service list
functional_test.go:1476: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 service list: exit status 103 (256.499106ms)

-- stdout --
	* The control-plane node functional-699837 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-699837"

-- /stdout --
functional_test.go:1478: failed to do service list. args "out/minikube-linux-amd64 -p functional-699837 service list" : exit status 103
functional_test.go:1481: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-699837 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-699837\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 service list -o json
functional_test.go:1506: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 service list -o json: exit status 103 (271.899652ms)

-- stdout --
	* The control-plane node functional-699837 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-699837"

-- /stdout --
functional_test.go:1508: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-699837 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/any-port (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1754298848740038253" to /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1754298848740038253" to /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1754298848740038253" to /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001/test-1754298848740038253
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.111999ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0804 09:14:09.069522 1582690 retry.go:31] will retry after 457.582149ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  4 09:14 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  4 09:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  4 09:14 test-1754298848740038253
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh cat /mount-9p/test-1754298848740038253
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-699837 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-699837 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.983848ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-699837 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (288.718211ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=999,access=any,msize=262144,trans=tcp,noextend,port=43013)
	total 2
	-rw-r--r-- 1 docker docker 24 Aug  4 09:14 created-by-test
	-rw-r--r-- 1 docker docker 24 Aug  4 09:14 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Aug  4 09:14 test-1754298848740038253
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-699837 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:43013
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001:/mount-9p --alsologtostderr -v=1] stderr:
I0804 09:14:08.796741 1680826 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:08.796968 1680826 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:08.796981 1680826 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:08.796987 1680826 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:08.797294 1680826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:08.797631 1680826 mustload.go:65] Loading cluster: functional-699837
I0804 09:14:08.797995 1680826 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:08.798378 1680826 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:08.817283 1680826 host.go:66] Checking if "functional-699837" exists ...
I0804 09:14:08.817575 1680826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0804 09:14:08.924293 1680826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:55 SystemTime:2025-08-04 09:14:08.901194859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0804 09:14:08.924559 1680826 cli_runner.go:164] Run: docker network inspect functional-699837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0804 09:14:08.948991 1680826 out.go:177] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001 into VM as /mount-9p ...
I0804 09:14:08.950400 1680826 out.go:177]   - Mount type:   9p
I0804 09:14:08.953433 1680826 out.go:177]   - User ID:      docker
I0804 09:14:08.954349 1680826 out.go:177]   - Group ID:     docker
I0804 09:14:08.955416 1680826 out.go:177]   - Version:      9p2000.L
I0804 09:14:08.956593 1680826 out.go:177]   - Message Size: 262144
I0804 09:14:08.957656 1680826 out.go:177]   - Options:      map[]
I0804 09:14:08.958892 1680826 out.go:177]   - Bind Address: 192.168.49.1:43013
I0804 09:14:08.960123 1680826 out.go:177] * Userspace file server: 
I0804 09:14:08.960369 1680826 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I0804 09:14:08.960474 1680826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:08.982121 1680826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
I0804 09:14:09.076370 1680826 mount.go:180] unmount for /mount-9p ran successfully
I0804 09:14:09.076398 1680826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0804 09:14:09.086551 1680826 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43013,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I0804 09:14:09.099420 1680826 main.go:125] stdlog: ufs.go:141 connected
I0804 09:14:09.099610 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tversion tag 65535 msize 262144 version '9P2000.L'
I0804 09:14:09.099677 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rversion tag 65535 msize 262144 version '9P2000'
I0804 09:14:09.099957 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0804 09:14:09.100047 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rattach tag 0 aqid (20fc227 745c15e0 'd')
I0804 09:14:09.100387 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 0
I0804 09:14:09.100521 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fc227 745c15e0 'd') m d775 at 0 mt 1754298848 l 4096 t 0 d 0 ext )
I0804 09:14:09.102354 1680826 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/.mount-process: {Name:mkb5d1e3601b6c7f8cf3a4593d2a9c25ab2dc0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 09:14:09.102586 1680826 mount.go:105] mount successful: ""
I0804 09:14:09.104245 1680826 out.go:177] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdany-port3821011156/001 to /mount-9p
I0804 09:14:09.105432 1680826 out.go:201] 
I0804 09:14:09.106389 1680826 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0804 09:14:10.105937 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 0
I0804 09:14:10.106097 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fc227 745c15e0 'd') m d775 at 0 mt 1754298848 l 4096 t 0 d 0 ext )
I0804 09:14:10.106583 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 1 
I0804 09:14:10.106660 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 
I0804 09:14:10.106851 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Topen tag 0 fid 1 mode 0
I0804 09:14:10.106935 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Ropen tag 0 qid (20fc227 745c15e0 'd') iounit 0
I0804 09:14:10.107082 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 0
I0804 09:14:10.107217 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fc227 745c15e0 'd') m d775 at 0 mt 1754298848 l 4096 t 0 d 0 ext )
I0804 09:14:10.107554 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 0 count 262120
I0804 09:14:10.107747 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 258
I0804 09:14:10.107911 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 258 count 261862
I0804 09:14:10.107955 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.108100 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 258 count 262120
I0804 09:14:10.108142 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.108293 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 2 0:'test-1754298848740038253' 
I0804 09:14:10.108339 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc22a 745c15e0 '') 
I0804 09:14:10.108450 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.108552 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('test-1754298848740038253' 'jenkins' 'balintp' '' q (20fc22a 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.108686 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.108757 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('test-1754298848740038253' 'jenkins' 'balintp' '' q (20fc22a 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.108876 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.108922 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.109051 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0804 09:14:10.109079 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc229 745c15e0 '') 
I0804 09:14:10.109178 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.109301 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fc229 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.109468 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.109562 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fc229 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.109696 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.109736 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.109887 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0804 09:14:10.109925 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc228 745c15e0 '') 
I0804 09:14:10.110063 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.110166 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fc228 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.110297 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.110376 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fc228 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.110514 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.110551 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.110654 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 258 count 262120
I0804 09:14:10.110678 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.110807 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 1
I0804 09:14:10.110851 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.413485 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 1 0:'test-1754298848740038253' 
I0804 09:14:10.413586 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc22a 745c15e0 '') 
I0804 09:14:10.413773 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 1
I0804 09:14:10.413923 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('test-1754298848740038253' 'jenkins' 'balintp' '' q (20fc22a 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.414068 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 1 newfid 2 
I0804 09:14:10.414117 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 
I0804 09:14:10.414254 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Topen tag 0 fid 2 mode 0
I0804 09:14:10.414320 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Ropen tag 0 qid (20fc22a 745c15e0 '') iounit 0
I0804 09:14:10.414433 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 1
I0804 09:14:10.414531 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('test-1754298848740038253' 'jenkins' 'balintp' '' q (20fc22a 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.414656 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 2 offset 0 count 262120
I0804 09:14:10.414721 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 24
I0804 09:14:10.414824 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 2 offset 24 count 262120
I0804 09:14:10.414860 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.414962 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 2 offset 24 count 262120
I0804 09:14:10.415035 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.415209 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.415245 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.415451 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 1
I0804 09:14:10.415482 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.762005 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 0
I0804 09:14:10.762175 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fc227 745c15e0 'd') m d775 at 0 mt 1754298848 l 4096 t 0 d 0 ext )
I0804 09:14:10.762641 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 1 
I0804 09:14:10.762700 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 
I0804 09:14:10.762867 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Topen tag 0 fid 1 mode 0
I0804 09:14:10.762941 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Ropen tag 0 qid (20fc227 745c15e0 'd') iounit 0
I0804 09:14:10.763095 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 0
I0804 09:14:10.763203 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fc227 745c15e0 'd') m d775 at 0 mt 1754298848 l 4096 t 0 d 0 ext )
I0804 09:14:10.763464 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 0 count 262120
I0804 09:14:10.763668 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 258
I0804 09:14:10.763822 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 258 count 261862
I0804 09:14:10.763863 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.764027 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 258 count 262120
I0804 09:14:10.764086 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.764249 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 2 0:'test-1754298848740038253' 
I0804 09:14:10.764297 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc22a 745c15e0 '') 
I0804 09:14:10.764430 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.764525 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('test-1754298848740038253' 'jenkins' 'balintp' '' q (20fc22a 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.764677 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.764772 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('test-1754298848740038253' 'jenkins' 'balintp' '' q (20fc22a 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.764933 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.764967 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.765134 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0804 09:14:10.765175 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc229 745c15e0 '') 
I0804 09:14:10.765311 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.765404 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fc229 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.765542 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.765610 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fc229 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.765740 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.765788 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.765943 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0804 09:14:10.765988 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rwalk tag 0 (20fc228 745c15e0 '') 
I0804 09:14:10.766100 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.766187 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fc228 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.766327 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tstat tag 0 fid 2
I0804 09:14:10.766425 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fc228 745c15e0 '') m 644 at 0 mt 1754298848 l 24 t 0 d 0 ext )
I0804 09:14:10.766561 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 2
I0804 09:14:10.766598 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.766716 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tread tag 0 fid 1 offset 258 count 262120
I0804 09:14:10.766747 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rread tag 0 count 0
I0804 09:14:10.766879 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 1
I0804 09:14:10.766908 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:10.768153 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0804 09:14:10.768203 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rerror tag 0 ename 'file not found' ecode 0
I0804 09:14:11.048427 1680826 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:54396 Tclunk tag 0 fid 0
I0804 09:14:11.048474 1680826 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:54396 Rclunk tag 0
I0804 09:14:11.053467 1680826 main.go:125] stdlog: ufs.go:147 disconnected
I0804 09:14:11.073997 1680826 out.go:177] * Unmounting /mount-9p ...
I0804 09:14:11.075292 1680826 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I0804 09:14:11.083249 1680826 mount.go:180] unmount for /mount-9p ran successfully
I0804 09:14:11.083384 1680826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/.mount-process: {Name:mkb5d1e3601b6c7f8cf3a4593d2a9c25ab2dc0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 09:14:11.084898 1680826 out.go:201] 
W0804 09:14:11.085898 1680826 out.go:270] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0804 09:14:11.086956 1680826 out.go:201] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/any-port (2.43s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 service --namespace=default --https --url hello-node
functional_test.go:1526: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 service --namespace=default --https --url hello-node: exit status 103 (317.249253ms)

-- stdout --
	* The control-plane node functional-699837 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-699837"

-- /stdout --
functional_test.go:1528: failed to get service url. args "out/minikube-linux-amd64 -p functional-699837 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 service hello-node --url --format={{.IP}}
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 service hello-node --url --format={{.IP}}: exit status 103 (265.04309ms)

-- stdout --
	* The control-plane node functional-699837 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-699837"

-- /stdout --
functional_test.go:1559: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-699837 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1565: "* The control-plane node functional-699837 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-699837\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/Format (0.27s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 service hello-node --url
functional_test.go:1576: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 service hello-node --url: exit status 103 (255.640581ms)

-- stdout --
	* The control-plane node functional-699837 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-699837"

-- /stdout --
functional_test.go:1578: failed to get service url. args: "out/minikube-linux-amd64 -p functional-699837 service hello-node --url": exit status 103
functional_test.go:1582: found endpoint for hello-node: * The control-plane node functional-699837 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-699837"
functional_test.go:1586: failed to parse "* The control-plane node functional-699837 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-699837\"": parse "* The control-plane node functional-699837 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-699837\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ServiceCmd/URL (0.26s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DockerEnv/bash (0.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-699837 docker-env) && out/minikube-linux-amd64 status -p functional-699837"
functional_test.go:516: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-699837 docker-env) && out/minikube-linux-amd64 status -p functional-699837": exit status 2 (621.104074ms)

-- stdout --
	functional-699837
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

-- /stdout --
functional_test.go:522: failed to do status after eval-ing docker-env. error: exit status 2
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DockerEnv/bash (0.62s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I0804 09:14:16.368928 1687682 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:16.369064 1687682 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:16.369114 1687682 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:16.369132 1687682 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:16.369368 1687682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:16.369626 1687682 mustload.go:65] Loading cluster: functional-699837
I0804 09:14:16.370025 1687682 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:16.370475 1687682 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:16.393662 1687682 host.go:66] Checking if "functional-699837" exists ...
I0804 09:14:16.393870 1687682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0804 09:14:16.467365 1687682 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 09:14:16.45499714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0804 09:14:16.467527 1687682 api_server.go:166] Checking apiserver status ...
I0804 09:14:16.467604 1687682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 09:14:16.467654 1687682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:16.486109 1687682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
W0804 09:14:16.581905 1687682 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0804 09:14:16.584193 1687682 out.go:177] * The control-plane node functional-699837 apiserver is not running: (state=Stopped)
I0804 09:14:16.586278 1687682 out.go:177]   To start a cluster, run: "minikube start -p functional-699837"

stdout: * The control-plane node functional-699837 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-699837"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-699837 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-699837 apply -f testdata/testsvc.yaml: exit status 1 (61.865898ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-699837 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)
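The failure above happens before any objects are applied: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and that dial is refused. A minimal Go sketch reproducing just the dial, using the endpoint from the error message (192.168.49.2:8441):

// Editorial sketch: a bare TCP dial against the apiserver address from the
// log reproduces the same "connection refused" without involving kubectl.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // expect: connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is reachable")
}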

x
+
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (116.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.107.4.181": Temporary Error: Get "http://10.107.4.181": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-699837 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-699837 get svc nginx-svc: exit status 1 (48.114062ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-699837 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (116.36s)
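The probe at functional_test_tunnel_test.go:288 is an HTTP GET against the nginx-svc ClusterIP with a hard client timeout; with no tunnel and no apiserver it times out awaiting headers, as logged. A minimal Go sketch of that probe, using the ClusterIP from the log (10.107.4.181); the 5-second timeout is an assumption standing in for the test's retry loop:

// Editorial sketch of the direct-access probe the tunnel test performs.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.107.4.181")
	if err != nil {
		fmt.Println("probe failed:", err) // Client.Timeout exceeded while awaiting headers
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("got:", string(body)) // the test expects "Welcome to nginx!"
}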

x
+
TestKubernetesUpgrade (805.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402519 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402519 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.981653812s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-402519
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-402519: (1.190018656s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-402519 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-402519 status --format={{.Host}}: exit status 7 (69.130096ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
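The "(may be ok)" note reflects that "minikube status" encodes per-component state as bit flags in its exit code, so exit status 7 after a stop means host, cluster, and Kubernetes all reported stopped rather than the command itself failing. A sketch decoding it; the flag names and bit assignments below are assumptions paraphrasing minikube's own constants:

// Editorial sketch: decode a `minikube status` exit code as bit flags.
package main

import "fmt"

const (
	hostNotRunning    = 1 << 0 // assumed: host/VM stopped
	clusterNotRunning = 1 << 1 // assumed: cluster components stopped
	k8sNotRunning     = 1 << 2 // assumed: kubelet/apiserver stopped
)

func main() {
	code := 7 // exit status from the log above
	fmt.Println("host stopped:   ", code&hostNotRunning != 0)
	fmt.Println("cluster stopped:", code&clusterNotRunning != 0)
	fmt.Println("k8s stopped:    ", code&k8sNotRunning != 0)
}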
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402519 --memory=3072 --kubernetes-version=v1.34.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0804 09:45:41.678261 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-402519 --memory=3072 --kubernetes-version=v1.34.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 80 (12m40.12415652s)

-- stdout --
	* [kubernetes-upgrade-402519] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-402519" primary control-plane node in "kubernetes-upgrade-402519" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Restarting existing docker container for "kubernetes-upgrade-402519" ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	

-- /stdout --
** stderr ** 
	I0804 09:45:10.949336 1914687 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:45:10.949637 1914687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:45:10.949652 1914687 out.go:358] Setting ErrFile to fd 2...
	I0804 09:45:10.949659 1914687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:45:10.949923 1914687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:45:10.950508 1914687 out.go:352] Setting JSON to false
	I0804 09:45:10.951666 1914687 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":152800,"bootTime":1754147911,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:45:10.951799 1914687 start.go:140] virtualization: kvm guest
	I0804 09:45:10.953855 1914687 out.go:177] * [kubernetes-upgrade-402519] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:45:10.955160 1914687 notify.go:220] Checking for updates...
	I0804 09:45:10.955170 1914687 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:45:10.956462 1914687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:45:10.957581 1914687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:45:10.958664 1914687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:45:10.959658 1914687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:45:10.960606 1914687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:45:10.961933 1914687 config.go:182] Loaded profile config "kubernetes-upgrade-402519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0804 09:45:10.962395 1914687 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:45:10.983975 1914687 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:45:10.984043 1914687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:45:11.036723 1914687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 09:45:11.02764177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:45:11.036831 1914687 docker.go:318] overlay module found
	I0804 09:45:11.038565 1914687 out.go:177] * Using the docker driver based on existing profile
	I0804 09:45:11.039621 1914687 start.go:304] selected driver: docker
	I0804 09:45:11.039633 1914687 start.go:918] validating driver "docker" against &{Name:kubernetes-upgrade-402519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-402519 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:45:11.039706 1914687 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:45:11.040493 1914687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:45:11.088042 1914687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 09:45:11.078593385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:45:11.088403 1914687 cni.go:84] Creating CNI manager for ""
	I0804 09:45:11.088467 1914687 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:45:11.088506 1914687 start.go:348] cluster config:
	{Name:kubernetes-upgrade-402519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:kubernetes-upgrade-402519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:45:11.090125 1914687 out.go:177] * Starting "kubernetes-upgrade-402519" primary control-plane node in "kubernetes-upgrade-402519" cluster
	I0804 09:45:11.091174 1914687 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:45:11.092203 1914687 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:45:11.093140 1914687 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:45:11.093176 1914687 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 09:45:11.093182 1914687 cache.go:56] Caching tarball of preloaded images
	I0804 09:45:11.093248 1914687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:45:11.093300 1914687 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 09:45:11.093313 1914687 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 09:45:11.093435 1914687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/config.json ...
	I0804 09:45:11.111980 1914687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:45:11.112004 1914687 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:45:11.112018 1914687 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:45:11.112049 1914687 start.go:360] acquireMachinesLock for kubernetes-upgrade-402519: {Name:mk68ee1843bbcd0ae6b8d49bebae58aa0b621ab4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:45:11.112107 1914687 start.go:364] duration metric: took 41.373µs to acquireMachinesLock for "kubernetes-upgrade-402519"
	I0804 09:45:11.112124 1914687 start.go:96] Skipping create...Using existing machine configuration
	I0804 09:45:11.112129 1914687 fix.go:54] fixHost starting: 
	I0804 09:45:11.112316 1914687 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-402519 --format={{.State.Status}}
	I0804 09:45:11.128633 1914687 fix.go:112] recreateIfNeeded on kubernetes-upgrade-402519: state=Stopped err=<nil>
	W0804 09:45:11.128658 1914687 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 09:45:11.130328 1914687 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-402519" ...
	I0804 09:45:11.131446 1914687 cli_runner.go:164] Run: docker start kubernetes-upgrade-402519
	I0804 09:45:11.355074 1914687 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-402519 --format={{.State.Status}}
	I0804 09:45:11.373659 1914687 kic.go:430] container "kubernetes-upgrade-402519" state is running.
	I0804 09:45:11.374197 1914687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-402519
	I0804 09:45:11.392910 1914687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/config.json ...
	I0804 09:45:11.393132 1914687 machine.go:93] provisionDockerMachine start ...
	I0804 09:45:11.393207 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:11.411069 1914687 main.go:141] libmachine: Using SSH client type: native
	I0804 09:45:11.411395 1914687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I0804 09:45:11.411415 1914687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:45:11.412051 1914687 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58704->127.0.0.1:32998: read: connection reset by peer
	I0804 09:45:14.540421 1914687 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-402519
	
	I0804 09:45:14.540447 1914687 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-402519"
	I0804 09:45:14.540510 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:14.557903 1914687 main.go:141] libmachine: Using SSH client type: native
	I0804 09:45:14.558107 1914687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I0804 09:45:14.558121 1914687 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-402519 && echo "kubernetes-upgrade-402519" | sudo tee /etc/hostname
	I0804 09:45:14.696153 1914687 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-402519
	
	I0804 09:45:14.696235 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:14.713156 1914687 main.go:141] libmachine: Using SSH client type: native
	I0804 09:45:14.713458 1914687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I0804 09:45:14.713479 1914687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-402519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-402519/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-402519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:45:14.837272 1914687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:45:14.837318 1914687 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:45:14.837348 1914687 ubuntu.go:177] setting up certificates
	I0804 09:45:14.837370 1914687 provision.go:84] configureAuth start
	I0804 09:45:14.837456 1914687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-402519
	I0804 09:45:14.854377 1914687 provision.go:143] copyHostCerts
	I0804 09:45:14.854458 1914687 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:45:14.854471 1914687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:45:14.854550 1914687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:45:14.854673 1914687 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:45:14.854686 1914687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:45:14.854722 1914687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:45:14.854818 1914687 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:45:14.854827 1914687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:45:14.854858 1914687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:45:14.854945 1914687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-402519 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-402519 localhost minikube]
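configureAuth above generates a server certificate whose subject alternative names are exactly the san=[...] list in the log line: two IPs and three DNS names. A minimal Go sketch producing a certificate with those SANs, self-signed for brevity (minikube signs with its CA key instead); the organization and validity period are taken from the log and cluster config:

// Editorial sketch: emit a PEM certificate with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Println(err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-402519"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		DNSNames:     []string{"kubernetes-upgrade-402519", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template doubles as parent); minikube uses its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}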
	I0804 09:45:15.605517 1914687 provision.go:177] copyRemoteCerts
	I0804 09:45:15.605578 1914687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:45:15.605613 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:15.622818 1914687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/kubernetes-upgrade-402519/id_rsa Username:docker}
	I0804 09:45:15.713787 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0804 09:45:15.735484 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:45:15.756688 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:45:15.778270 1914687 provision.go:87] duration metric: took 940.883675ms to configureAuth
	I0804 09:45:15.778300 1914687 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:45:15.778460 1914687 config.go:182] Loaded profile config "kubernetes-upgrade-402519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:45:15.778505 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:15.796510 1914687 main.go:141] libmachine: Using SSH client type: native
	I0804 09:45:15.796747 1914687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I0804 09:45:15.796759 1914687 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:45:15.921637 1914687 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:45:15.921663 1914687 ubuntu.go:71] root file system type: overlay
	I0804 09:45:15.921767 1914687 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:45:15.921834 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:15.939149 1914687 main.go:141] libmachine: Using SSH client type: native
	I0804 09:45:15.939361 1914687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I0804 09:45:15.939422 1914687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:45:16.072548 1914687 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:45:16.072641 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:16.091744 1914687 main.go:141] libmachine: Using SSH client type: native
	I0804 09:45:16.091953 1914687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I0804 09:45:16.091970 1914687 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 09:45:16.217829 1914687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:45:16.217857 1914687 machine.go:96] duration metric: took 4.824709005s to provisionDockerMachine
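provisionDockerMachine updates the docker unit idempotently: it renders the unit to docker.service.new, then runs "diff -u" against the live unit and only moves the new file, reloads systemd, and restarts docker when the content actually changed (the SSH one-liner a few lines up). A sketch of the same compare-then-swap pattern in Go; the paths mirror the log, and running it for real needs root plus systemd:

// Editorial sketch: only restart docker when the rendered unit differs.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(current, next string) error {
	oldData, _ := os.ReadFile(current) // a missing file reads as empty and forces an update
	newData, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		fmt.Println("unit unchanged, skipping restart")
		return nil
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return fmt.Errorf("%v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println("update failed:", err)
	}
}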
	I0804 09:45:16.217867 1914687 start.go:293] postStartSetup for "kubernetes-upgrade-402519" (driver="docker")
	I0804 09:45:16.217878 1914687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:45:16.217936 1914687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:45:16.217974 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:16.235306 1914687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/kubernetes-upgrade-402519/id_rsa Username:docker}
	I0804 09:45:16.330578 1914687 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:45:16.333711 1914687 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:45:16.333747 1914687 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:45:16.333759 1914687 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:45:16.333768 1914687 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:45:16.333783 1914687 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:45:16.333844 1914687 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:45:16.333949 1914687 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:45:16.334086 1914687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 09:45:16.341980 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:45:16.363240 1914687 start.go:296] duration metric: took 145.357985ms for postStartSetup
	I0804 09:45:16.363323 1914687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:45:16.363370 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:16.380400 1914687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/kubernetes-upgrade-402519/id_rsa Username:docker}
	I0804 09:45:16.465869 1914687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:45:16.470113 1914687 fix.go:56] duration metric: took 5.357976408s for fixHost
	I0804 09:45:16.470139 1914687 start.go:83] releasing machines lock for "kubernetes-upgrade-402519", held for 5.358021134s
	I0804 09:45:16.470208 1914687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-402519
	I0804 09:45:16.488295 1914687 ssh_runner.go:195] Run: cat /version.json
	I0804 09:45:16.488338 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:16.488355 1914687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:45:16.488421 1914687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-402519
	I0804 09:45:16.507684 1914687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/kubernetes-upgrade-402519/id_rsa Username:docker}
	I0804 09:45:16.508075 1914687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/kubernetes-upgrade-402519/id_rsa Username:docker}
	I0804 09:45:16.664090 1914687 ssh_runner.go:195] Run: systemctl --version
	I0804 09:45:16.668434 1914687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:45:16.672655 1914687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:45:16.689815 1914687 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:45:16.689889 1914687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0804 09:45:16.704579 1914687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0804 09:45:16.719219 1914687 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 09:45:16.719243 1914687 start.go:495] detecting cgroup driver to use...
	I0804 09:45:16.719273 1914687 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:45:16.719371 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:45:16.733548 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:17.122533 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:45:17.132745 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:45:17.142083 1914687 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:45:17.142134 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:45:17.150979 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:45:17.159439 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:45:17.167770 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:45:17.176256 1914687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:45:17.184248 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:45:17.192764 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:45:17.201112 1914687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 09:45:17.209629 1914687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:45:17.216697 1914687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:45:17.223809 1914687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:45:17.293800 1914687 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 09:45:17.381932 1914687 start.go:495] detecting cgroup driver to use...
	I0804 09:45:17.381986 1914687 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:45:17.382038 1914687 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:45:17.393088 1914687 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:45:17.393162 1914687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:45:17.404827 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:45:17.420721 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:17.816099 1914687 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:45:17.819835 1914687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:45:17.827862 1914687 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 09:45:17.844944 1914687 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:45:17.922790 1914687 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:45:17.998712 1914687 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:45:17.998831 1914687 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 09:45:18.016026 1914687 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:45:18.025817 1914687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:45:18.102508 1914687 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:45:18.406914 1914687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:45:18.418275 1914687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:45:18.429001 1914687 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:45:18.506755 1914687 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:45:18.578239 1914687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:45:18.650442 1914687 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:45:18.662614 1914687 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:45:18.672302 1914687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:45:18.751164 1914687 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:45:18.810065 1914687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:45:18.820611 1914687 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:45:18.820682 1914687 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:45:18.823875 1914687 start.go:563] Will wait 60s for crictl version
	I0804 09:45:18.823916 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:45:18.826918 1914687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:45:18.858805 1914687 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 09:45:18.858870 1914687 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:45:18.882393 1914687 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:45:18.907924 1914687 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 09:45:18.908006 1914687 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-402519 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:45:18.924789 1914687 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0804 09:45:18.928484 1914687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 09:45:18.938827 1914687 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-402519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:kubernetes-upgrade-402519 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:45:18.939030 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:19.337033 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:19.743910 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:20.148310 1914687 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:45:20.148469 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:20.548677 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:20.955345 1914687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:45:21.376657 1914687 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:45:21.398313 1914687 docker.go:703] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0804 09:45:21.398337 1914687 docker.go:709] registry.k8s.io/kube-apiserver:v1.34.0-beta.0 wasn't preloaded
	I0804 09:45:21.398387 1914687 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0804 09:45:21.407632 1914687 ssh_runner.go:195] Run: which lz4
	I0804 09:45:21.411132 1914687 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 09:45:21.414425 1914687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 09:45:21.414462 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (327421044 bytes)
	I0804 09:45:22.155662 1914687 docker.go:667] duration metric: took 744.55679ms to copy over tarball
	I0804 09:45:22.155751 1914687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 09:45:24.159827 1914687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.004028888s)
	I0804 09:45:24.159867 1914687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 09:45:24.236815 1914687 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0804 09:45:24.248424 1914687 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (4789 bytes)
	I0804 09:45:24.271501 1914687 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:45:24.285956 1914687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:45:24.396388 1914687 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:45:30.808067 1914687 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.411639702s)
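The 6.4s docker restart above is the tail of the preload path: the lz4 image tarball is copied to the node, unpacked over /var (which contains /var/lib/docker), deleted, and dockerd is restarted so it picks up the extracted image store, which is why both the old v1.20.0 and new v1.34.0-beta.0 images appear in the next listing. A sketch of those node-side steps, assuming tar with lz4 support and systemd on the node:

// Editorial sketch: replay the preload extraction commands from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
		{"sudo", "systemctl", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("preloaded images extracted")
}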
	I0804 09:45:30.808175 1914687 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:45:30.833580 1914687 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0804 09:45:30.833603 1914687 cache_images.go:85] Images are preloaded, skipping loading
	I0804 09:45:30.833615 1914687 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 09:45:30.833745 1914687 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-402519 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:kubernetes-upgrade-402519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 09:45:30.833804 1914687 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:45:30.902427 1914687 cni.go:84] Creating CNI manager for ""
	I0804 09:45:30.902453 1914687 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:45:30.902464 1914687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 09:45:30.902482 1914687 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-402519 NodeName:kubernetes-upgrade-402519 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:45:30.902610 1914687 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-402519"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 09:45:30.902664 1914687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:45:30.913552 1914687 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 09:45:30.913613 1914687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:45:30.923827 1914687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0804 09:45:30.944008 1914687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 09:45:30.963950 1914687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2310 bytes)
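
The 2310-byte kubeadm.yaml.new written above is the four-document config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A cheap structural sanity check on such a file is to split the documents and print each header; a sketch assuming gopkg.in/yaml.v3 is available, not something minikube itself runs here:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    type header struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var h header
    		if err := dec.Decode(&h); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// For the config above this prints v1beta4 InitConfiguration,
    		// v1beta4 ClusterConfiguration, v1beta1 KubeletConfiguration,
    		// and v1alpha1 KubeProxyConfiguration.
    		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
    	}
    }
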
	I0804 09:45:30.985400 1914687 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:45:30.989745 1914687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 09:45:31.003300 1914687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:45:31.100807 1914687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:45:31.118257 1914687 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519 for IP: 192.168.85.2
	I0804 09:45:31.118282 1914687 certs.go:194] generating shared ca certs ...
	I0804 09:45:31.118312 1914687 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:45:31.118477 1914687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:45:31.118536 1914687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:45:31.118554 1914687 certs.go:256] generating profile certs ...
	I0804 09:45:31.118670 1914687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/client.key
	I0804 09:45:31.118751 1914687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/apiserver.key.60b92b53
	I0804 09:45:31.118806 1914687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/proxy-client.key
	I0804 09:45:31.118944 1914687 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:45:31.118984 1914687 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:45:31.118997 1914687 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:45:31.119028 1914687 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:45:31.119058 1914687 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:45:31.119088 1914687 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:45:31.119143 1914687 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:45:31.119925 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:45:31.148537 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:45:31.181114 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:45:31.208827 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:45:31.270999 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 09:45:31.304992 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 09:45:31.331260 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:45:31.359508 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 09:45:31.387587 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:45:31.415549 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:45:31.442451 1914687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:45:31.471190 1914687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 09:45:31.491722 1914687 ssh_runner.go:195] Run: openssl version
	I0804 09:45:31.498269 1914687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:45:31.510117 1914687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:45:31.513717 1914687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:45:31.513774 1914687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:45:31.522399 1914687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 09:45:31.532910 1914687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:45:31.544134 1914687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:45:31.548031 1914687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:45:31.548087 1914687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:45:31.556815 1914687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 09:45:31.567082 1914687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:45:31.578460 1914687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:45:31.582420 1914687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:45:31.582478 1914687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:45:31.590877 1914687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
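
Each `ln -fs` above installs a CA into the OpenSSL trust directory: the link name is the certificate's subject hash (the output of `openssl x509 -hash -noout`) plus a .0 suffix, which is how OpenSSL looks trusted certificates up. A sketch of the same idiom, illustrative only and requiring root for /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"

    	// Same command as in the log: print the subject hash and nothing else.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")

    	// `ln -fs` semantics: replace the link if it already exists.
    	_ = os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pem)
    }
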
	I0804 09:45:31.601676 1914687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:45:31.605659 1914687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 09:45:31.613738 1914687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 09:45:31.621627 1914687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 09:45:31.629029 1914687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 09:45:31.636207 1914687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 09:45:31.644138 1914687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
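
Each `-checkend 86400` call above asks openssl whether the certificate is still valid 24 hours from now; a non-zero exit means it expires (or has already expired) within that window, which is what triggers regeneration. A Go equivalent using crypto/x509, an illustration rather than what minikube runs:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// openssl -checkend exits non-zero when NotAfter falls inside the window.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Path taken from the log; any PEM-encoded certificate works.
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
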
	I0804 09:45:31.651782 1914687 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-402519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:kubernetes-upgrade-402519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:45:31.651925 1914687 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:45:31.674938 1914687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:45:31.686404 1914687 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 09:45:31.686427 1914687 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 09:45:31.686517 1914687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 09:45:31.698261 1914687 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:45:31.698980 1914687 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-402519" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:45:31.699341 1914687 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-402519" cluster setting kubeconfig missing "kubernetes-upgrade-402519" context setting]
	I0804 09:45:31.700035 1914687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:45:31.700859 1914687 kapi.go:59] client config for kubernetes-upgrade-402519: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/client.crt", KeyFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/client.key", CAFile:"/home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2595680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 09:45:31.701538 1914687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0804 09:45:31.701562 1914687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0804 09:45:31.701570 1914687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0804 09:45:31.701576 1914687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0804 09:45:31.701582 1914687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0804 09:45:31.701986 1914687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 09:45:31.712377 1914687 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-08-04 09:44:54.908830011 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-08-04 09:45:30.979343712 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -11,36 +11,40 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/dockershim.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-402519"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.20.0
	+      - name: "proxy-refresh-interval"
	+        value: "70000"
	+kubernetesVersion: v1.34.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -52,6 +56,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
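
The drift check above leans on diff's exit status: 0 means the on-node kubeadm.yaml matches the newly rendered one, 1 means they differ and the cluster must be reconfigured from the new file. A sketch of that decision, simplified; minikube runs the diff over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()

    	code := 0
    	if ee, ok := err.(*exec.ExitError); ok {
    		code = ee.ExitCode()
    	} else if err != nil {
    		panic(err) // diff itself could not be started
    	}

    	switch code {
    	case 0:
    		fmt.Println("no drift; reuse the existing config")
    	case 1:
    		fmt.Printf("drift detected; reconfigure from the new file:\n%s", out)
    	default:
    		fmt.Printf("diff failed with status %d\n", code)
    	}
    }
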
	I0804 09:45:31.712401 1914687 kubeadm.go:1152] stopping kube-system containers ...
	I0804 09:45:31.712455 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:45:31.738742 1914687 docker.go:496] Stopping containers: [cc5bed820423 4a93264af8b9 3bf4e03f1d1e 61b35865b3b0 2004527ded86 aba1a7e2bb9b aa54ee8a699b 1a7af5ce7a64]
	I0804 09:45:31.738798 1914687 ssh_runner.go:195] Run: docker stop cc5bed820423 4a93264af8b9 3bf4e03f1d1e 61b35865b3b0 2004527ded86 aba1a7e2bb9b aa54ee8a699b 1a7af5ce7a64
	I0804 09:45:31.761766 1914687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 09:45:31.877758 1914687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:45:31.888985 1914687 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5615 Aug  4 09:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5632 Aug  4 09:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Aug  4 09:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5580 Aug  4 09:44 /etc/kubernetes/scheduler.conf
	
	I0804 09:45:31.889072 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:45:31.899106 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:45:31.908733 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:45:31.919211 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:45:31.919271 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:45:31.928914 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:45:31.938541 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 09:45:31.938592 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:45:31.948328 1914687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:45:31.958518 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:45:32.007431 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:45:33.241588 1914687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234116889s)
	I0804 09:45:33.241625 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:45:33.422585 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 09:45:33.489391 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
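
Rather than a full `kubeadm init`, the restart path above drives kubeadm phase by phase: certs, kubeconfig, kubelet-start, control-plane, then local etcd, each against the regenerated config. A sketch of that sequence; the ordering is taken from the log, and the real invocations are wrapped in `sudo env PATH=...` and executed over SSH:

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			panic(err) // a failed phase aborts the restart
    		}
    	}
    }
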
	I0804 09:45:33.584111 1914687 api_server.go:52] waiting for apiserver process to appear ...
	I0804 09:45:33.584294 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:34.084377 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:34.584666 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:35.084647 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:35.584979 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:36.084804 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:36.584513 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:37.085227 1914687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:45:37.096431 1914687 api_server.go:72] duration metric: took 3.512329206s to wait for apiserver process to appear ...
	I0804 09:45:37.096460 1914687 api_server.go:88] waiting for apiserver healthz status ...
	I0804 09:45:37.096483 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:42.096899 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:45:42.096947 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:47.097621 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:45:47.097689 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:52.098939 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:45:52.098990 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:57.099519 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:45:57.099594 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:57.578225 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:37658->192.168.85.2:8443: read: connection reset by peer
	I0804 09:45:57.597490 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:57.597952 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:45:58.096607 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:58.097201 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:45:58.596855 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:58.597313 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:45:59.096995 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:45:59.097414 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:45:59.597064 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:04.597365 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:46:04.597411 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:09.601081 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:46:09.601151 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:14.604265 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:46:14.604326 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:19.606002 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:46:19.606048 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:19.696454 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:56488->192.168.85.2:8443: read: connection reset by peer
	I0804 09:46:20.097069 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:20.097539 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:20.597287 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:20.597699 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:21.097439 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:21.097919 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:21.596558 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:21.596970 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:22.096643 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:22.097108 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:22.597381 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:22.597798 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:23.097508 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:23.097827 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:23.597476 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:23.597912 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:24.096544 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:24.096906 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:24.596543 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:24.596955 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:25.096592 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:25.096990 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:25.596628 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:25.597017 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:26.096600 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:26.097028 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:26.596683 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:26.597127 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:27.096785 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:27.097229 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:27.596909 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:27.597305 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:28.096942 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:28.097479 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:28.596998 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:28.597466 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:29.097016 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:29.097551 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:29.597280 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:29.597702 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:30.097443 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:30.097846 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:30.597562 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:30.597963 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:31.096601 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:31.096970 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:31.596594 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:31.596968 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:32.096607 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:32.096976 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:32.596617 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:32.597004 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:33.096628 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:33.096974 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:33.596614 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:33.597081 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:34.097590 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:34.097994 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:34.597458 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:34.597819 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:35.097567 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:35.097942 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:35.596552 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:35.596968 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:36.097551 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:36.097929 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:46:36.596545 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:36.596974 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
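
The long run above is a poll loop: each probe gives /healthz roughly five seconds before timing out, and refused connections are retried on a roughly 500ms cadence until an overall deadline expires. A minimal sketch of such a poller, simplified: the real checker authenticates with the profile's client certificates, whereas this one skips TLS verification purely for illustration:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gaps between probes
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the retry cadence
    	}
    	fmt.Println("gave up waiting for /healthz")
    }
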
	I0804 09:46:37.096788 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:46:37.119415 1914687 logs.go:282] 2 containers: [f8236ecabcb5 3bf4e03f1d1e]
	I0804 09:46:37.119508 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:46:37.141466 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:46:37.141548 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:46:37.175352 1914687 logs.go:282] 0 containers: []
	W0804 09:46:37.175383 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:46:37.175450 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:46:37.199520 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:46:37.199602 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:46:37.220681 1914687 logs.go:282] 0 containers: []
	W0804 09:46:37.220708 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:46:37.220765 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:46:37.242967 1914687 logs.go:282] 3 containers: [cf41671710f4 8c531545e4c2 4a93264af8b9]
	I0804 09:46:37.243063 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:46:37.261430 1914687 logs.go:282] 0 containers: []
	W0804 09:46:37.261461 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:46:37.261521 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:46:37.281714 1914687 logs.go:282] 0 containers: []
	W0804 09:46:37.281743 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:46:37.281760 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:46:37.281775 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:46:37.343269 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:46:37.343307 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:46:37.366776 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:46:37.366807 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:46:37.403177 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:46:37.403213 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:46:37.443152 1914687 logs.go:123] Gathering logs for kube-controller-manager [8c531545e4c2] ...
	I0804 09:46:37.443177 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c531545e4c2"
	I0804 09:46:37.469675 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:46:37.469706 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:46:37.498808 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:46:37.498840 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:46:37.528653 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:46:37.528700 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:46:37.598332 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:46:37.598355 1914687 logs.go:123] Gathering logs for kube-apiserver [f8236ecabcb5] ...
	I0804 09:46:37.598380 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8236ecabcb5"
	I0804 09:46:37.634143 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:46:37.634189 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:46:37.708419 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:46:37.708454 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:46:37.738274 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:46:37.738317 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:46:37.762755 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:46:37.762788 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
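
When healthz keeps failing, the runner snapshots diagnostics: journalctl for the kubelet and Docker units, plus the last 400 lines from every matching container, found by ID through `docker ps -a` name filters. A sketch of the per-container part, illustrative only:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List running and exited kube-apiserver containers, IDs only,
    	// like the `docker ps -a --filter=name=... --format={{.ID}}` calls above.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		// Capture each container's last 400 log lines.
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("=== %s ===\n%s\n", id, logs)
    	}
    }
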
	I0804 09:46:40.313337 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:46:45.317129 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:46:45.317262 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:46:45.336951 1914687 logs.go:282] 3 containers: [2a39d895a9d4 f8236ecabcb5 3bf4e03f1d1e]
	I0804 09:46:45.337043 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:46:45.358590 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:46:45.358687 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:46:45.376829 1914687 logs.go:282] 0 containers: []
	W0804 09:46:45.376859 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:46:45.376914 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:46:45.397013 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:46:45.397111 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:46:45.418217 1914687 logs.go:282] 0 containers: []
	W0804 09:46:45.418247 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:46:45.418317 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:46:45.439352 1914687 logs.go:282] 3 containers: [cf41671710f4 8c531545e4c2 4a93264af8b9]
	I0804 09:46:45.439412 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:46:45.462500 1914687 logs.go:282] 0 containers: []
	W0804 09:46:45.462531 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:46:45.462586 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:46:45.481140 1914687 logs.go:282] 0 containers: []
	W0804 09:46:45.481169 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:46:45.481183 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:46:45.481198 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:46:45.505408 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:46:45.505447 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:46:45.573679 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:46:45.573721 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:46:45.598697 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:46:45.598726 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:46:45.620948 1914687 logs.go:123] Gathering logs for kube-controller-manager [8c531545e4c2] ...
	I0804 09:46:45.621046 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c531545e4c2"
	I0804 09:46:45.649114 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:46:45.649140 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:46:45.669592 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:46:45.669622 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:46:45.709978 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:46:45.710022 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:46:45.743135 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:46:45.743174 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:46:55.807171 1914687 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.06397502s)
	W0804 09:46:55.807215 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0804 09:46:55.807225 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:46:55.807239 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:46:55.833175 1914687 logs.go:123] Gathering logs for kube-apiserver [f8236ecabcb5] ...
	I0804 09:46:55.833206 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8236ecabcb5"
	I0804 09:46:55.857866 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:46:55.857899 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:46:55.920739 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:46:55.920775 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:46:55.955585 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:46:55.955614 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:46:58.487803 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:01.718471 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:59430->192.168.85.2:8443: read: connection reset by peer
	I0804 09:47:01.718608 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:01.740764 1914687 logs.go:282] 3 containers: [2a39d895a9d4 f8236ecabcb5 3bf4e03f1d1e]
	I0804 09:47:01.740871 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:01.759944 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:01.760025 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:01.778139 1914687 logs.go:282] 0 containers: []
	W0804 09:47:01.778161 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:01.778206 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:01.802151 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:01.802255 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:01.825781 1914687 logs.go:282] 0 containers: []
	W0804 09:47:01.825815 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:01.825869 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:01.843555 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:01.843629 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:01.861699 1914687 logs.go:282] 0 containers: []
	W0804 09:47:01.861723 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:01.861779 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:01.878903 1914687 logs.go:282] 0 containers: []
	W0804 09:47:01.878927 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:01.878939 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:01.878952 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:01.913319 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:01.913352 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:01.935976 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:01.936004 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:01.957059 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:01.957087 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:01.978615 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:01.978654 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:02.002773 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:02.002812 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:02.057958 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:02.057983 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:02.057998 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:02.081500 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:02.081530 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:02.104746 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:02.104773 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:02.142175 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:02.142205 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:02.204956 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:02.204988 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:02.230537 1914687 logs.go:123] Gathering logs for kube-apiserver [f8236ecabcb5] ...
	I0804 09:47:02.230569 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8236ecabcb5"
	I0804 09:47:02.255729 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:02.255760 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
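
The cycle above starts with a healthz probe (the api_server.go lines) that polls https://192.168.85.2:8443/healthz and records any transport error as "stopped", then retries a few seconds later. A minimal Go sketch of such a poll loop, with the endpoint taken from this log; the client timeout, retry interval, and skipped certificate verification are assumptions for illustration, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves healthz over its self-signed cert, so an
        // illustrative probe either trusts the cluster CA or skips verification.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
        for i := 0; i < 10; i++ {
            resp, err := client.Get(url)
            if err != nil {
                // "connection refused" and "connection reset" land here,
                // matching the "stopped:" lines in the log.
                fmt.Printf("stopped: %v\n", err)
                time.Sleep(3 * time.Second) // the log shows roughly 3s between checks
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }
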
	I0804 09:47:04.803902 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:04.804358 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:04.804474 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:04.825926 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:04.826004 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:04.845110 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:04.845201 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:04.864774 1914687 logs.go:282] 0 containers: []
	W0804 09:47:04.864801 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:04.864853 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:04.886867 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:04.886962 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:04.914930 1914687 logs.go:282] 0 containers: []
	W0804 09:47:04.914962 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:04.915017 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:04.936906 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:04.936975 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:04.957155 1914687 logs.go:282] 0 containers: []
	W0804 09:47:04.957178 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:04.957218 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:04.979040 1914687 logs.go:282] 0 containers: []
	W0804 09:47:04.979064 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:04.979079 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:04.979093 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:05.005721 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:05.005751 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:05.050837 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:05.050880 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:05.133550 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:05.133590 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:05.205056 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:05.205074 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:05.205088 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:05.254735 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:05.254773 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:05.284757 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:05.284795 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:05.325211 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:05.325269 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:05.349361 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:05.349390 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:05.370562 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:05.370590 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:05.393677 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:05.393708 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:05.419293 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:05.419321 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:07.941317 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:07.941762 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:07.941871 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:07.963513 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:07.963577 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:07.984775 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:07.984855 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:08.003290 1914687 logs.go:282] 0 containers: []
	W0804 09:47:08.003317 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:08.003374 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:08.024172 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:08.024235 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:08.044086 1914687 logs.go:282] 0 containers: []
	W0804 09:47:08.044105 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:08.044152 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:08.070966 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:08.071069 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:08.090295 1914687 logs.go:282] 0 containers: []
	W0804 09:47:08.090328 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:08.090394 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:08.112970 1914687 logs.go:282] 0 containers: []
	W0804 09:47:08.112995 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:08.113008 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:08.113022 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:08.140524 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:08.140557 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:08.193102 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:08.193148 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:08.244399 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:08.244437 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:08.352421 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:08.352466 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:08.386667 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:08.386713 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:08.415885 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:08.415918 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:08.448586 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:08.448625 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:08.477230 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:08.477290 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:08.508220 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:08.508264 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:08.596540 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:08.596567 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:08.596585 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:08.633787 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:08.633831 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
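
Each cycle then discovers control-plane containers by filtering docker ps on the k8s_ name prefix, one component at a time (the logs.go:282 counts). A small Go sketch of that enumeration, shelling out to the same docker command seen in the log; the helper name and component list are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // starts with k8s_<component>, exactly as the log's docker ps calls do.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:282
        }
    }
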
	I0804 09:47:11.214891 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:11.215482 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:11.215595 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:11.238760 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:11.238853 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:11.257098 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:11.257167 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:11.276074 1914687 logs.go:282] 0 containers: []
	W0804 09:47:11.276109 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:11.276174 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:11.295272 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:11.295352 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:11.312998 1914687 logs.go:282] 0 containers: []
	W0804 09:47:11.313025 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:11.313071 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:11.331858 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:11.331953 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:11.350559 1914687 logs.go:282] 0 containers: []
	W0804 09:47:11.350582 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:11.350626 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:11.368547 1914687 logs.go:282] 0 containers: []
	W0804 09:47:11.368578 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:11.368594 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:11.368611 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:11.419355 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:11.419389 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:11.446417 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:11.446451 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:11.468453 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:11.468482 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:11.511962 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:11.511989 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:11.537515 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:11.537548 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:11.590832 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:11.590865 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:11.590882 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:11.617291 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:11.617323 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:11.641191 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:11.641223 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:11.678659 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:11.678691 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:11.699700 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:11.699728 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:11.724095 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:11.724124 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:14.297366 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:14.297871 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:14.297992 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:14.319436 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:14.319530 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:14.345061 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:14.345147 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:14.371057 1914687 logs.go:282] 0 containers: []
	W0804 09:47:14.371095 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:14.371168 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:14.397728 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:14.397810 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:14.423328 1914687 logs.go:282] 0 containers: []
	W0804 09:47:14.423360 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:14.423429 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:14.446888 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:14.446976 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:14.467446 1914687 logs.go:282] 0 containers: []
	W0804 09:47:14.467477 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:14.467534 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:14.493048 1914687 logs.go:282] 0 containers: []
	W0804 09:47:14.493070 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:14.493081 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:14.493094 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:14.563358 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:14.563391 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:14.563405 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:14.589878 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:14.589911 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:14.627054 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:14.627086 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:14.652368 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:14.652412 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:14.674334 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:14.674358 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:14.695165 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:14.695194 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:14.749353 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:14.749390 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:14.776086 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:14.776136 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:14.802633 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:14.802659 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:14.857837 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:14.857880 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:14.945827 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:14.945876 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:17.479037 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:17.479546 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:17.479649 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:17.507072 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:17.507150 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:17.532934 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:17.533005 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:17.554853 1914687 logs.go:282] 0 containers: []
	W0804 09:47:17.554883 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:17.554947 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:17.580907 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:17.580997 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:17.607153 1914687 logs.go:282] 0 containers: []
	W0804 09:47:17.607190 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:17.607248 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:17.631578 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:17.631656 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:17.658644 1914687 logs.go:282] 0 containers: []
	W0804 09:47:17.658665 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:17.658706 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:17.682838 1914687 logs.go:282] 0 containers: []
	W0804 09:47:17.682865 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:17.682879 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:17.682897 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:17.717933 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:17.717975 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:17.766275 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:17.766302 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:17.884713 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:17.884758 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:17.964860 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:17.964893 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:17.994978 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:17.995025 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:18.045635 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:18.045671 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:18.075345 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:18.075375 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:18.102003 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:18.102050 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:18.128561 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:18.128594 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:18.202311 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:18.202338 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:18.202353 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:18.232882 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:18.232913 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:20.761319 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:20.761832 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:20.762064 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:20.784456 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:20.784517 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:20.803863 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:20.803931 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:20.825425 1914687 logs.go:282] 0 containers: []
	W0804 09:47:20.825454 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:20.825511 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:20.844182 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:20.844245 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:20.861078 1914687 logs.go:282] 0 containers: []
	W0804 09:47:20.861105 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:20.861163 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:20.878326 1914687 logs.go:282] 2 containers: [cf41671710f4 4a93264af8b9]
	I0804 09:47:20.878390 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:20.896013 1914687 logs.go:282] 0 containers: []
	W0804 09:47:20.896040 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:20.896089 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:20.916232 1914687 logs.go:282] 0 containers: []
	W0804 09:47:20.916260 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:20.916275 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:20.916291 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:20.993383 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:20.993415 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:21.017153 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:21.017179 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:21.041500 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:21.041529 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:21.066055 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:21.066092 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:21.089679 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:21.089702 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:21.115393 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:21.115417 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:21.159686 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:21.159717 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:21.225715 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:21.225739 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:21.225759 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:21.298329 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:21.298370 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:21.326777 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:21.326812 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:21.375549 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:21.375578 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
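
The gather steps in each cycle map one log source to one shell command: docker logs --tail 400 for each container, journalctl for kubelet and the Docker units, dmesg for kernel warnings, crictl/docker ps for container status, and kubectl describe nodes against the node's kubeconfig. A compact Go sketch running those commands over bash, using the etcd container ID from this log; the gather helper is assumed for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command through bash -c, like ssh_runner does.
    func gather(name, cmd string) {
        fmt.Println("Gathering logs for", name, "...")
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("failed %s: %v\n", name, err)
        }
        _ = out // a real collector would buffer or stream this output
    }

    func main() {
        id := "61b35865b3b0" // etcd container ID from the log above
        gather("etcd", "docker logs --tail 400 "+id)
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
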
	I0804 09:47:23.900450 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:23.901179 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:23.901320 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:23.940300 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:23.940405 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:23.967901 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:23.967997 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:23.992589 1914687 logs.go:282] 0 containers: []
	W0804 09:47:23.992612 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:23.992660 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:24.013439 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:24.013505 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:24.038823 1914687 logs.go:282] 0 containers: []
	W0804 09:47:24.038853 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:24.038913 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:24.070549 1914687 logs.go:282] 3 containers: [18cf2a173cad cf41671710f4 4a93264af8b9]
	I0804 09:47:24.070699 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:24.105443 1914687 logs.go:282] 0 containers: []
	W0804 09:47:24.105478 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:24.105537 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:24.131036 1914687 logs.go:282] 0 containers: []
	W0804 09:47:24.131066 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:24.131094 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:24.131111 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:24.164228 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:24.164263 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:24.300273 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:24.300332 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:24.395588 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:24.395613 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:24.395630 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:24.456956 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:24.456996 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:24.508471 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:47:24.508508 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:47:24.547234 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:24.547269 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:24.672970 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:24.673028 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:24.765211 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:24.765279 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:24.805865 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:24.805914 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:24.896246 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:24.896290 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:24.921497 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:24.921532 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:24.978501 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:24.978555 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:27.516027 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:27.516612 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:27.516728 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:27.542709 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:27.542798 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:27.567186 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:27.567252 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:27.588583 1914687 logs.go:282] 0 containers: []
	W0804 09:47:27.588615 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:27.588669 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:27.609915 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:27.609980 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:27.628717 1914687 logs.go:282] 0 containers: []
	W0804 09:47:27.628744 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:27.628791 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:27.647580 1914687 logs.go:282] 3 containers: [18cf2a173cad cf41671710f4 4a93264af8b9]
	I0804 09:47:27.647658 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:27.665452 1914687 logs.go:282] 0 containers: []
	W0804 09:47:27.665475 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:27.665517 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:27.686793 1914687 logs.go:282] 0 containers: []
	W0804 09:47:27.686819 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:27.686835 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:27.686850 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:27.778026 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:27.778091 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:27.807541 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:27.807582 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:27.876365 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:27.876394 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:27.876412 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:27.930239 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:27.930278 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:27.980644 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:27.980755 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:28.013146 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:28.013185 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:28.044703 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:28.044743 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:28.090220 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:28.090260 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:28.130226 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:28.130268 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:28.173005 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:28.173040 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:28.200855 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:47:28.200892 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:47:28.225404 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:28.225430 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:30.748046 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:30.748518 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:30.748669 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:30.768404 1914687 logs.go:282] 2 containers: [2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:30.768477 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:30.786522 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:30.786591 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:30.804592 1914687 logs.go:282] 0 containers: []
	W0804 09:47:30.804614 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:30.804656 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:30.822717 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:30.822777 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:30.839854 1914687 logs.go:282] 0 containers: []
	W0804 09:47:30.839876 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:30.839917 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:30.858587 1914687 logs.go:282] 3 containers: [18cf2a173cad cf41671710f4 4a93264af8b9]
	I0804 09:47:30.858677 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:30.876265 1914687 logs.go:282] 0 containers: []
	W0804 09:47:30.876297 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:30.876356 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:30.894507 1914687 logs.go:282] 0 containers: []
	W0804 09:47:30.894537 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:30.894550 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:30.894567 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:30.918317 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:30.918349 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:30.940919 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:30.940951 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:30.978945 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:30.978974 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:31.056482 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:31.056518 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:31.084525 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:31.084557 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:31.133129 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:31.133165 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:31.156947 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:31.156976 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:31.179929 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:31.179960 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:31.201408 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:31.201440 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:31.224953 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:31.224982 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:31.285632 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:31.285653 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:31.285671 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:31.325101 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:47:31.325134 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:47:33.847762 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:38.850376 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:47:38.850517 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:38.879180 1914687 logs.go:282] 3 containers: [419e073f26de 2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:38.879553 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:38.918304 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:38.918405 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:38.945751 1914687 logs.go:282] 0 containers: []
	W0804 09:47:38.945777 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:38.945869 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:38.971406 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:38.971505 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:39.003422 1914687 logs.go:282] 0 containers: []
	W0804 09:47:39.003451 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:39.003509 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:39.039123 1914687 logs.go:282] 3 containers: [18cf2a173cad cf41671710f4 4a93264af8b9]
	I0804 09:47:39.039217 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:39.085390 1914687 logs.go:282] 0 containers: []
	W0804 09:47:39.085420 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:39.085476 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:39.131255 1914687 logs.go:282] 0 containers: []
	W0804 09:47:39.131283 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:39.131297 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:39.131321 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:39.273058 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:47:39.273158 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:47:39.337613 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:39.337705 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:39.371146 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:47:39.371229 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:47:39.435854 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:39.435974 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:39.467188 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:39.467216 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:39.498808 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:39.498901 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:39.537909 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:39.537949 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:47:49.677086 1914687 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.139111423s)
	W0804 09:47:49.677139 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0804 09:47:49.677149 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:49.677164 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:49.732145 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:49.732181 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:49.759903 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:49.759949 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:49.815473 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:49.815522 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:49.844393 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:49.844424 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:49.866923 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:49.866949 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
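
Three distinct probe failures appear in this section: "connection reset by peer" at 09:47:01, "connection refused" in most cycles, and "Client.Timeout exceeded while awaiting headers" at 09:47:38 (with the matching "TLS handshake timeout" from kubectl), each suggesting a different apiserver state. A hedged Go sketch classifying them; the mapping from error to cause is an interpretation of this log, not something minikube itself reports:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
    )

    // classify maps a probe error to a plausible apiserver state.
    func classify(err error) string {
        var nerr net.Error
        switch {
        case errors.Is(err, syscall.ECONNREFUSED):
            return "nothing listening on 8443 (apiserver container down or restarting)"
        case errors.Is(err, syscall.ECONNRESET):
            return "connection accepted then dropped (apiserver crashed mid-request)"
        case errors.As(err, &nerr) && nerr.Timeout():
            return "port open but no response before the deadline (apiserver hung during startup)"
        default:
            return "other failure"
        }
    }

    func main() {
        fmt.Println(classify(syscall.ECONNREFUSED))
    }
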
	I0804 09:47:52.416310 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:52.416748 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:52.416846 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:52.437194 1914687 logs.go:282] 3 containers: [419e073f26de 2a39d895a9d4 3bf4e03f1d1e]
	I0804 09:47:52.437292 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:52.457081 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:52.457155 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:52.475438 1914687 logs.go:282] 0 containers: []
	W0804 09:47:52.475462 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:52.475511 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:52.494476 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:52.494562 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:52.513848 1914687 logs.go:282] 0 containers: []
	W0804 09:47:52.513875 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:52.513924 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:52.538823 1914687 logs.go:282] 3 containers: [18cf2a173cad cf41671710f4 4a93264af8b9]
	I0804 09:47:52.538887 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:52.560543 1914687 logs.go:282] 0 containers: []
	W0804 09:47:52.560566 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:52.560614 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:52.580702 1914687 logs.go:282] 0 containers: []
	W0804 09:47:52.580729 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:52.580742 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:52.580758 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:52.640776 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:52.640799 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:47:52.640810 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:47:53.641744 1914687 logs.go:123] Gathering logs for kube-apiserver [2a39d895a9d4] ...
	I0804 09:47:53.641772 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a39d895a9d4"
	I0804 09:47:53.681069 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:53.681111 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:53.743261 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:53.743360 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:53.780242 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:47:53.780286 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:47:53.806381 1914687 logs.go:123] Gathering logs for kube-controller-manager [cf41671710f4] ...
	I0804 09:47:53.806493 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf41671710f4"
	I0804 09:47:53.832806 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:53.832828 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:53.860375 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:53.860416 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:53.971360 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:53.971401 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:53.999445 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:53.999474 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:54.033775 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:54.033804 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:54.093074 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:54.093157 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:54.128602 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:54.128638 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:56.693334 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:56.693727 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:56.693819 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:56.720907 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:47:56.720973 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:56.747252 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:56.747341 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:56.770399 1914687 logs.go:282] 0 containers: []
	W0804 09:47:56.770430 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:56.770490 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:56.794288 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:56.794350 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:56.815141 1914687 logs.go:282] 0 containers: []
	W0804 09:47:56.815166 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:56.815210 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:56.834387 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:47:56.834483 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:47:56.859695 1914687 logs.go:282] 0 containers: []
	W0804 09:47:56.859724 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:47:56.859782 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:47:56.879073 1914687 logs.go:282] 0 containers: []
	W0804 09:47:56.879106 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:47:56.879122 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:47:56.879136 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:47:56.927283 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:47:56.927322 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:47:56.969590 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:47:56.969629 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:47:56.996641 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:47:56.996675 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:47:57.019532 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:47:57.019567 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:47:57.043795 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:47:57.043827 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:47:57.104595 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:47:57.104658 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:47:57.104708 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:47:57.133562 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:47:57.133605 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:47:57.158759 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:47:57.158790 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:47:57.185904 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:47:57.185932 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:47:57.208758 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:47:57.208796 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:47:57.252240 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:47:57.252286 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:47:59.834570 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:47:59.835015 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:47:59.835120 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:47:59.853937 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:47:59.853990 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:47:59.877127 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:47:59.877200 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:47:59.900570 1914687 logs.go:282] 0 containers: []
	W0804 09:47:59.900596 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:47:59.900652 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:47:59.924755 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:47:59.924832 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:47:59.951633 1914687 logs.go:282] 0 containers: []
	W0804 09:47:59.951656 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:47:59.951705 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:47:59.981803 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:47:59.981888 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:00.005785 1914687 logs.go:282] 0 containers: []
	W0804 09:48:00.005815 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:00.005865 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:00.028554 1914687 logs.go:282] 0 containers: []
	W0804 09:48:00.028587 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:00.028606 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:00.028626 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:00.053621 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:00.053654 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:00.119090 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:00.119131 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:00.143726 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:00.143753 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:00.207704 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:00.207743 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:00.233152 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:00.233182 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:00.331523 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:00.331556 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:00.398583 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:00.398603 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:00.398617 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:00.433995 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:00.434049 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:00.455894 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:00.455923 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:00.484313 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:00.484344 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:00.508902 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:00.508931 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:03.057312 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:03.057687 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:03.057775 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:03.082375 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:03.082451 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:03.108326 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:03.108405 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:03.130230 1914687 logs.go:282] 0 containers: []
	W0804 09:48:03.130255 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:03.130308 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:03.161719 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:03.161802 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:03.183050 1914687 logs.go:282] 0 containers: []
	W0804 09:48:03.183077 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:03.183142 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:03.206148 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:03.206226 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:03.229002 1914687 logs.go:282] 0 containers: []
	W0804 09:48:03.229030 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:03.229071 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:03.252512 1914687 logs.go:282] 0 containers: []
	W0804 09:48:03.252535 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:03.252547 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:03.252558 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:03.360147 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:03.360196 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:03.389622 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:03.389663 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:03.453849 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:03.453869 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:03.453884 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:03.482211 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:03.482253 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:03.507730 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:03.507759 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:03.534805 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:03.534848 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:03.564127 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:03.564182 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:03.588115 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:03.588160 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:03.637727 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:03.637766 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:03.679335 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:03.679369 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:03.699505 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:03.699534 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:06.238757 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:06.239211 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:06.239311 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:06.264877 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:06.264953 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:06.284833 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:06.284909 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:06.305276 1914687 logs.go:282] 0 containers: []
	W0804 09:48:06.305307 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:06.305360 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:06.329120 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:06.329193 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:06.355616 1914687 logs.go:282] 0 containers: []
	W0804 09:48:06.355646 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:06.355696 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:06.379210 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:06.379277 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:06.400372 1914687 logs.go:282] 0 containers: []
	W0804 09:48:06.400398 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:06.400444 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:06.420365 1914687 logs.go:282] 0 containers: []
	W0804 09:48:06.420389 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:06.420401 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:06.420414 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:06.492227 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:06.492248 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:06.492267 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:06.529480 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:06.529507 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:06.556895 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:06.556926 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:06.605685 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:06.605713 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:06.636988 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:06.637028 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:06.663211 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:06.663248 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:06.743390 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:06.743433 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:06.770977 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:06.771015 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:06.795139 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:06.795171 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:06.842728 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:06.842758 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:06.936107 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:06.936140 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:09.469142 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:09.469606 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:09.469698 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:09.510162 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:09.510231 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:09.534730 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:09.534795 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:09.553422 1914687 logs.go:282] 0 containers: []
	W0804 09:48:09.553445 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:09.553492 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:09.576120 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:09.576203 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:09.597321 1914687 logs.go:282] 0 containers: []
	W0804 09:48:09.597353 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:09.597412 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:09.621585 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:09.621677 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:09.642786 1914687 logs.go:282] 0 containers: []
	W0804 09:48:09.642808 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:09.642853 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:09.660600 1914687 logs.go:282] 0 containers: []
	W0804 09:48:09.660619 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:09.660630 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:09.660641 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:09.701948 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:09.701989 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:09.732774 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:09.732807 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:09.755339 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:09.755370 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:09.805539 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:09.805571 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:09.830323 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:09.830349 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:09.889639 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:09.889682 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:09.921002 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:09.921031 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:09.970289 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:09.970324 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:09.995595 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:09.995626 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:10.018330 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:10.018360 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:10.120159 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:10.120203 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:10.197280 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:12.698402 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:12.698880 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:12.698970 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:12.720239 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:12.720307 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:12.737655 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:12.737716 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:12.755416 1914687 logs.go:282] 0 containers: []
	W0804 09:48:12.755439 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:12.755487 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:12.773904 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:12.773986 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:12.797908 1914687 logs.go:282] 0 containers: []
	W0804 09:48:12.797931 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:12.797981 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:12.818505 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:12.818588 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:12.836421 1914687 logs.go:282] 0 containers: []
	W0804 09:48:12.836445 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:12.836501 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:12.853138 1914687 logs.go:282] 0 containers: []
	W0804 09:48:12.853168 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:12.853184 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:12.853199 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:12.925861 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:12.925887 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:12.925905 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:12.975964 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:12.975996 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:13.002309 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:13.002337 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:13.029871 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:13.029902 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:13.052412 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:13.052440 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:13.105362 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:13.105393 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:13.133888 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:13.133918 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:13.159318 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:13.159348 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:13.222285 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:13.222369 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:13.248874 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:13.248962 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:13.275961 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:13.275993 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:15.882382 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:15.882828 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:15.882931 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:15.901799 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:15.901870 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:15.920160 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:15.920250 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:15.937353 1914687 logs.go:282] 0 containers: []
	W0804 09:48:15.937384 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:15.937432 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:15.954827 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:15.954917 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:15.971868 1914687 logs.go:282] 0 containers: []
	W0804 09:48:15.971896 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:15.971958 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:15.988919 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:15.988990 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:16.006770 1914687 logs.go:282] 0 containers: []
	W0804 09:48:16.006797 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:16.006849 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:16.024612 1914687 logs.go:282] 0 containers: []
	W0804 09:48:16.024640 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:16.024654 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:16.024666 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:16.051316 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:16.051353 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:16.100953 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:16.101020 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:16.228145 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:16.228190 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:16.252465 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:16.252493 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:16.316471 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:16.316523 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:16.342943 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:16.342984 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:16.368068 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:16.368117 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:16.414258 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:16.414293 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:16.491178 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:16.491208 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:16.491220 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:16.528017 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:16.528055 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:16.592900 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:16.592940 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:19.129355 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:19.129700 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:19.129774 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:19.148562 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:19.148643 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:19.170578 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:19.170673 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:19.191369 1914687 logs.go:282] 0 containers: []
	W0804 09:48:19.191401 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:19.191461 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:19.231483 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:19.231572 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:19.250780 1914687 logs.go:282] 0 containers: []
	W0804 09:48:19.250805 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:19.250861 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:19.283723 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:19.283813 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:19.310968 1914687 logs.go:282] 0 containers: []
	W0804 09:48:19.310996 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:19.311056 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:19.334344 1914687 logs.go:282] 0 containers: []
	W0804 09:48:19.334373 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:19.334396 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:19.334410 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:19.361723 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:19.361891 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:19.390780 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:19.390818 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:19.440089 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:19.440117 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:19.546361 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:19.546402 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:19.574764 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:19.574797 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:19.604695 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:19.604725 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:19.633609 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:19.633647 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:19.659277 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:19.659309 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:19.729898 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:19.729921 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:19.729939 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:19.801437 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:19.801473 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:19.843539 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:19.843568 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:22.364960 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:22.365377 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:22.365479 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:22.386681 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:22.386766 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:22.406573 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:22.406653 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:22.428897 1914687 logs.go:282] 0 containers: []
	W0804 09:48:22.428928 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:22.428996 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:22.450780 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:22.450843 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:22.470508 1914687 logs.go:282] 0 containers: []
	W0804 09:48:22.470534 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:22.470587 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:22.492090 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:22.492156 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:22.510662 1914687 logs.go:282] 0 containers: []
	W0804 09:48:22.510685 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:22.510729 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:22.531389 1914687 logs.go:282] 0 containers: []
	W0804 09:48:22.531414 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:22.531428 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:22.531442 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:22.636226 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:22.636307 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:22.661808 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:22.661843 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:22.692426 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:22.692464 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:22.731965 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:22.732009 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:22.755305 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:22.755338 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:22.786620 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:22.786659 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:22.884693 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:22.884725 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:22.884738 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:22.928697 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:22.928732 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:22.993758 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:22.993811 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:23.025614 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:23.025652 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:23.070731 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:23.070767 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:25.620716 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:25.621219 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:25.621356 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:25.641335 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:25.641412 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:25.661867 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:25.661949 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:25.679670 1914687 logs.go:282] 0 containers: []
	W0804 09:48:25.679701 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:25.679759 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:25.698417 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:25.698492 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:25.716553 1914687 logs.go:282] 0 containers: []
	W0804 09:48:25.716581 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:25.716639 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:25.735669 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:25.735762 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:25.756928 1914687 logs.go:282] 0 containers: []
	W0804 09:48:25.756955 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:25.757015 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:25.779358 1914687 logs.go:282] 0 containers: []
	W0804 09:48:25.779390 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:25.779405 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:25.779422 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:25.807554 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:25.807600 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:25.865327 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:48:25.865352 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:25.865366 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:25.896213 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:25.896252 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:25.945052 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:25.945085 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:25.970288 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:25.970325 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:25.996455 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:25.996493 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:26.022241 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:26.022270 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:26.059549 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:26.059576 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:26.156371 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:26.156413 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:26.196374 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:26.196407 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:26.218576 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:26.218605 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
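Each retry in the loop above follows the same probe sequence: hit the apiserver health endpoint, enumerate the control-plane containers, then tail their logs. For reference, the equivalent manual probe against this cluster would be roughly the following (IP, port, and commands are taken from the log lines above; a sketch for reproducing the check by hand, not part of the captured test output):

    # probe the apiserver health endpoint the test is polling
    curl -k --max-time 5 https://192.168.85.2:8443/healthz

    # enumerate control-plane containers the same way minikube does
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}

    # tail a suspect container's logs (substitute a real ID from the listing)
    docker logs --tail 400 <container-id>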
	I0804 09:48:28.741865 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:28.742319 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:28.742410 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:28.762125 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:28.762208 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:28.780804 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:28.780878 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:28.801177 1914687 logs.go:282] 0 containers: []
	W0804 09:48:28.801205 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:28.801270 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:28.826411 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:28.826488 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:28.844779 1914687 logs.go:282] 0 containers: []
	W0804 09:48:28.844801 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:28.844856 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:28.863385 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:28.863480 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:28.882151 1914687 logs.go:282] 0 containers: []
	W0804 09:48:28.882186 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:28.882244 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:28.899933 1914687 logs.go:282] 0 containers: []
	W0804 09:48:28.899963 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:28.899976 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:28.899987 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:28.922551 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:28.922579 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:28.964784 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:28.964818 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:28.991781 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:28.991811 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:29.048744 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:48:29.048772 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:29.048791 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:29.076902 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:29.076934 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:29.126651 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:29.126684 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:29.150866 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:29.150898 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:29.175720 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:29.175750 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:29.199988 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:29.200025 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:29.222907 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:29.222940 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:29.310922 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:29.310963 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:31.852597 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:31.853024 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:31.853110 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:31.881773 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:31.881842 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:31.909367 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:31.909433 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:31.929020 1914687 logs.go:282] 0 containers: []
	W0804 09:48:31.929050 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:31.929107 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:31.947122 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:31.947206 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:31.965749 1914687 logs.go:282] 0 containers: []
	W0804 09:48:31.965774 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:31.965836 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:31.988439 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:31.988525 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:32.011756 1914687 logs.go:282] 0 containers: []
	W0804 09:48:32.011780 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:32.011838 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:32.029919 1914687 logs.go:282] 0 containers: []
	W0804 09:48:32.029944 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:32.029957 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:32.029972 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:32.056912 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:32.056942 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:32.111074 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:32.111120 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:32.137767 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:32.137799 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:32.162250 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:32.162283 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:32.202514 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:32.202543 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:32.229064 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:32.229106 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:32.277027 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:32.277060 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:32.304478 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:32.304504 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:32.331132 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:32.331165 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:32.358888 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:32.358916 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:32.444488 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:32.444528 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:32.506113 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:48:35.006426 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:35.006964 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:48:35.007079 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:35.032049 1914687 logs.go:282] 2 containers: [419e073f26de 3bf4e03f1d1e]
	I0804 09:48:35.032130 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:35.057507 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:35.057580 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:35.085743 1914687 logs.go:282] 0 containers: []
	W0804 09:48:35.085775 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:35.085829 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:35.109098 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:35.109163 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:35.130424 1914687 logs.go:282] 0 containers: []
	W0804 09:48:35.130449 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:35.130497 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:35.152194 1914687 logs.go:282] 2 containers: [18cf2a173cad 4a93264af8b9]
	I0804 09:48:35.152280 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:35.175476 1914687 logs.go:282] 0 containers: []
	W0804 09:48:35.175502 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:35.175553 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:35.195978 1914687 logs.go:282] 0 containers: []
	W0804 09:48:35.196003 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:35.196017 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:35.196030 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:35.227209 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:35.227238 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:35.278196 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:35.278238 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:35.327434 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:35.327478 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:35.354929 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:35.354961 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:35.381194 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:35.381227 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:35.469885 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:35.469922 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:35.535270 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:48:35.535295 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:35.535310 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:35.562754 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:35.562780 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:35.595599 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:35.595624 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:35.618825 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:35.618858 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:35.657224 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:35.657265 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:38.187632 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:43.187879 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 09:48:43.187979 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:43.210823 1914687 logs.go:282] 3 containers: [a25521cd2e4b 419e073f26de 3bf4e03f1d1e]
	I0804 09:48:43.210907 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:43.235921 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:43.236012 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:43.258764 1914687 logs.go:282] 0 containers: []
	W0804 09:48:43.258795 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:43.258856 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:43.283299 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:43.283375 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:43.305388 1914687 logs.go:282] 0 containers: []
	W0804 09:48:43.305417 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:43.305472 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:43.327266 1914687 logs.go:282] 3 containers: [bea2b4d6ce5d 18cf2a173cad 4a93264af8b9]
	I0804 09:48:43.327348 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:43.347796 1914687 logs.go:282] 0 containers: []
	W0804 09:48:43.347827 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:43.347879 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:43.370592 1914687 logs.go:282] 0 containers: []
	W0804 09:48:43.370614 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:43.370627 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:43.370638 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:43.425750 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:48:43.425787 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:48:43.473789 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:48:43.473825 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:48:43.502875 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:48:43.502905 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:48:43.527982 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:43.528012 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 09:48:53.588476 1914687 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060437583s)
	W0804 09:48:53.588521 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
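Unlike the surrounding retries, this attempt fails with a TLS handshake timeout instead of a refused connection, and the describe-nodes call took just over 10 seconds to return: the apiserver was briefly accepting TCP connections without completing the handshake. One way to tell the two failure modes apart from the host (a hypothetical probe, not part of the test run) is a direct handshake attempt:

    # connection refused fails immediately; a hung handshake stalls here
    openssl s_client -connect 192.168.85.2:8443 </dev/null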
	I0804 09:48:53.588531 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:53.588551 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:53.613917 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:53.613951 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:53.641828 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:48:53.641857 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:48:53.664182 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:48:53.664219 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:48:53.693683 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:48:53.693719 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:48:53.745705 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:48:53.745729 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:48:53.834419 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:53.834459 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:53.858244 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:48:53.858277 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:48:53.884891 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:53.884925 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
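Note the container counts in this cycle: the kube-apiserver filter now returns three IDs ([a25521cd2e4b 419e073f26de 3bf4e03f1d1e]) and kube-controller-manager three as well, which suggests the control plane is crash-looping and leaving exited containers behind. One way to confirm from the host would be to read the exit state of an older ID (docker inspect is standard; the exact format string is an illustrative assumption):

    docker inspect --format '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}' 419e073f26de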
	I0804 09:48:56.413992 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:48:58.208314 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:56462->192.168.85.2:8443: read: connection reset by peer
	I0804 09:48:58.208459 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:48:58.233749 1914687 logs.go:282] 3 containers: [a25521cd2e4b 419e073f26de 3bf4e03f1d1e]
	I0804 09:48:58.233832 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:48:58.258004 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:48:58.258078 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:48:58.279844 1914687 logs.go:282] 0 containers: []
	W0804 09:48:58.279873 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:48:58.279931 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:48:58.302493 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:48:58.302594 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:48:58.323748 1914687 logs.go:282] 0 containers: []
	W0804 09:48:58.323772 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:48:58.323823 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:48:58.344405 1914687 logs.go:282] 3 containers: [bea2b4d6ce5d 18cf2a173cad 4a93264af8b9]
	I0804 09:48:58.344494 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:48:58.366034 1914687 logs.go:282] 0 containers: []
	W0804 09:48:58.366061 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:48:58.366117 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:48:58.385355 1914687 logs.go:282] 0 containers: []
	W0804 09:48:58.385385 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:48:58.385407 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:48:58.385423 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:48:58.413354 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:48:58.413397 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:48:58.474990 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:48:58.475014 1914687 logs.go:123] Gathering logs for kube-apiserver [419e073f26de] ...
	I0804 09:48:58.475027 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419e073f26de"
	I0804 09:48:58.508840 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:48:58.508888 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:48:58.564679 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:48:58.564716 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:48:58.592156 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:48:58.592188 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:48:58.618492 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:48:58.618525 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:48:58.643635 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:48:58.643678 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:00.341461 1914687 ssh_runner.go:235] Completed: /bin/bash -c "docker logs --tail 400 a25521cd2e4b": (1.697752366s)
	I0804 09:49:00.347793 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:00.347826 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:00.409167 1914687 logs.go:123] Gathering logs for kube-controller-manager [18cf2a173cad] ...
	I0804 09:49:00.409203 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cf2a173cad"
	I0804 09:49:00.435075 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:00.435108 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:00.464653 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:00.464688 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:00.487041 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:00.487074 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:00.547650 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:00.547676 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:03.145316 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:03.145686 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:03.145781 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:03.167225 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:03.167308 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:03.197628 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:03.197701 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:03.215679 1914687 logs.go:282] 0 containers: []
	W0804 09:49:03.215711 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:03.215769 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:03.235699 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:03.235801 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:03.255071 1914687 logs.go:282] 0 containers: []
	W0804 09:49:03.255091 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:03.255135 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:03.291078 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:03.291161 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:03.311096 1914687 logs.go:282] 0 containers: []
	W0804 09:49:03.311124 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:03.311184 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:03.328031 1914687 logs.go:282] 0 containers: []
	W0804 09:49:03.328061 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:03.328076 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:03.328093 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:03.349057 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:03.349081 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:03.397986 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:03.398017 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:03.493702 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:03.493787 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:03.519030 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:03.519060 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:03.549332 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:03.549427 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:03.625208 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:03.625257 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:03.648414 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:03.648443 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:03.676297 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:03.676456 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:03.723085 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:03.723112 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:03.803778 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:49:03.803803 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:03.803820 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:03.828207 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:03.828238 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:06.385091 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:06.385536 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:06.385643 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:06.404595 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:06.404663 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:06.423116 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:06.423182 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:06.440344 1914687 logs.go:282] 0 containers: []
	W0804 09:49:06.440370 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:06.440418 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:06.457721 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:06.457797 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:06.474489 1914687 logs.go:282] 0 containers: []
	W0804 09:49:06.474513 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:06.474565 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:06.491793 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:06.491858 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:06.509947 1914687 logs.go:282] 0 containers: []
	W0804 09:49:06.509969 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:06.510014 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:06.527533 1914687 logs.go:282] 0 containers: []
	W0804 09:49:06.527557 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:06.527569 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:06.527584 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:06.623491 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:06.623527 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:06.675634 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:49:06.675656 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:06.675686 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:06.701398 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:06.701427 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:06.747000 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:06.747034 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:06.769247 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:06.769279 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:06.805468 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:06.805496 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:06.828996 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:06.829028 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:06.850745 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:06.850769 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:06.896644 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:06.896673 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:06.919350 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:06.919378 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:06.939395 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:06.939424 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:09.462668 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:09.463060 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:09.463152 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:09.482601 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:09.482677 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:09.503218 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:09.503283 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:09.523297 1914687 logs.go:282] 0 containers: []
	W0804 09:49:09.523320 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:09.523371 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:09.542656 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:09.542720 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:09.562388 1914687 logs.go:282] 0 containers: []
	W0804 09:49:09.562410 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:09.562465 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:09.582663 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:09.582747 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:09.602669 1914687 logs.go:282] 0 containers: []
	W0804 09:49:09.602693 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:09.602749 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:09.621142 1914687 logs.go:282] 0 containers: []
	W0804 09:49:09.621169 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:09.621184 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:09.621198 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:09.712863 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:09.712893 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:09.738227 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:09.738257 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:09.766098 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:09.766122 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:09.813612 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:09.813639 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:09.837310 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:09.837334 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:09.884173 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:09.884199 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:09.907665 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:09.907691 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:09.934061 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:09.934089 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:09.992664 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:49:09.992689 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:09.992706 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:10.018050 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:10.018133 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:10.039451 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:10.039477 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:12.577156 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:12.577632 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:12.577718 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:12.598208 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:12.598295 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:12.616834 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:12.616917 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:12.634870 1914687 logs.go:282] 0 containers: []
	W0804 09:49:12.634897 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:12.634956 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:12.652601 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:12.652667 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:12.669503 1914687 logs.go:282] 0 containers: []
	W0804 09:49:12.669531 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:12.669590 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:12.687260 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:12.687340 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:12.704279 1914687 logs.go:282] 0 containers: []
	W0804 09:49:12.704308 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:12.704358 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:12.721680 1914687 logs.go:282] 0 containers: []
	W0804 09:49:12.721704 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:12.721718 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:12.721734 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:12.776881 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:12.776917 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:12.802434 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:12.802465 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:12.826249 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:12.826276 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:12.864420 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:12.864445 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:12.952036 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:12.952078 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:13.008056 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:49:13.008077 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:13.008091 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:13.055926 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:13.055962 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:13.078209 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:13.078238 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:13.099282 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:13.099309 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:13.123110 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:13.123144 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:13.148148 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:13.148174 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:15.674584 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:15.675126 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:15.675461 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:15.723147 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:15.723239 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:15.747020 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:15.747102 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:15.783534 1914687 logs.go:282] 0 containers: []
	W0804 09:49:15.783583 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:15.783654 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:15.825850 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:15.825935 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:15.847419 1914687 logs.go:282] 0 containers: []
	W0804 09:49:15.847444 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:15.847487 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:15.884210 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:15.884305 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:15.928026 1914687 logs.go:282] 0 containers: []
	W0804 09:49:15.928057 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:15.928126 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:15.951428 1914687 logs.go:282] 0 containers: []
	W0804 09:49:15.951462 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:15.951477 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:15.951492 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:16.022762 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:16.022866 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:16.063439 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:16.063536 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:16.118270 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:16.118300 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:16.233153 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 09:49:16.233180 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:16.233197 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:16.295804 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:16.295856 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:16.338595 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:16.338630 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:16.382399 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:16.382435 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:16.452478 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:16.452512 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:16.513838 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:16.513871 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:16.615839 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:16.615887 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:16.647229 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:16.647273 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:19.191654 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:19.192099 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:19.192186 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:19.211021 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:19.211091 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:19.228207 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:19.228277 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:19.245440 1914687 logs.go:282] 0 containers: []
	W0804 09:49:19.245467 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:19.245522 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:19.262402 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:19.262486 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:19.279766 1914687 logs.go:282] 0 containers: []
	W0804 09:49:19.279790 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:19.279844 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:19.298702 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:19.298785 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:19.317370 1914687 logs.go:282] 0 containers: []
	W0804 09:49:19.317394 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:19.317444 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:19.337207 1914687 logs.go:282] 0 containers: []
	W0804 09:49:19.337275 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:19.337294 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:19.337310 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:19.403491 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:49:19.403518 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:19.403533 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:19.430922 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:19.430953 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:19.456492 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:19.456527 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:19.486523 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:19.486553 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:19.538739 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:19.538783 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:19.565491 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:19.565525 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:19.628927 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:19.628982 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:19.659460 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:19.659501 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:19.686237 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:19.686268 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:19.730004 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:19.730048 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:19.824312 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:19.824349 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:22.348893 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:22.349380 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:22.349486 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:22.373820 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:22.373897 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:22.394977 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:22.395055 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:22.418275 1914687 logs.go:282] 0 containers: []
	W0804 09:49:22.418303 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:22.418355 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:22.438094 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:22.438177 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:22.456570 1914687 logs.go:282] 0 containers: []
	W0804 09:49:22.456589 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:22.456629 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:22.482343 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:22.482433 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:22.505426 1914687 logs.go:282] 0 containers: []
	W0804 09:49:22.505450 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:22.505503 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:22.526422 1914687 logs.go:282] 0 containers: []
	W0804 09:49:22.526442 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:22.526454 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:22.526466 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:22.618265 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:22.618293 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:22.645876 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:22.645911 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:22.710569 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:22.710605 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:22.757419 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:22.757450 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:22.783862 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:22.783896 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:22.809357 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:22.809388 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:22.832995 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:22.833022 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:22.881226 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:22.881292 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:22.940393 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:49:22.940419 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:22.940431 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:22.974374 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:22.974403 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:23.002965 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:23.002996 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:25.527047 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:25.527502 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:25.527602 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:25.547628 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:25.547691 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:25.567681 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:25.567740 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:25.588237 1914687 logs.go:282] 0 containers: []
	W0804 09:49:25.588259 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:25.588308 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:25.608187 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:25.608268 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:25.633276 1914687 logs.go:282] 0 containers: []
	W0804 09:49:25.633303 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:25.633395 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:25.655102 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:25.655179 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:25.676720 1914687 logs.go:282] 0 containers: []
	W0804 09:49:25.676762 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:25.676800 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:25.697654 1914687 logs.go:282] 0 containers: []
	W0804 09:49:25.697680 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:25.697693 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:25.697705 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:25.726037 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:25.726073 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:25.787267 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:25.787303 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:25.814177 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:25.814209 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:25.838530 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:25.838565 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:25.910857 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:49:25.910878 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:25.910896 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:25.941431 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:25.941462 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:25.972639 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:25.972683 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:26.030580 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:26.030611 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:26.130937 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:26.130973 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:26.170397 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:26.170429 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:26.200020 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:26.200052 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:28.754394 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:28.754900 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:28.755001 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:28.776926 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:28.777005 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:28.797767 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:28.797841 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:28.816493 1914687 logs.go:282] 0 containers: []
	W0804 09:49:28.816521 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:28.816581 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:28.836757 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:28.836852 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:28.856450 1914687 logs.go:282] 0 containers: []
	W0804 09:49:28.856481 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:28.856544 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:28.877620 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:28.877692 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:28.896563 1914687 logs.go:282] 0 containers: []
	W0804 09:49:28.896590 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:28.896638 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:28.915923 1914687 logs.go:282] 0 containers: []
	W0804 09:49:28.915954 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:28.915969 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:28.915983 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:28.939820 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:28.939854 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:28.995426 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:28.995473 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:29.043721 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:29.043757 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:29.072033 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:29.072065 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:29.109639 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:29.109665 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:29.208375 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:29.208426 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:29.307578 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:49:29.307606 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:29.307633 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:29.335943 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:29.335974 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:29.360752 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:29.360784 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:29.382159 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:29.382192 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:29.407654 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:29.407684 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:31.931751 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:31.932287 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:31.932403 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:31.952419 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:31.952495 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:31.972160 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:31.972240 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:31.992378 1914687 logs.go:282] 0 containers: []
	W0804 09:49:31.992413 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:31.992478 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:32.011820 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:32.011909 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:32.032426 1914687 logs.go:282] 0 containers: []
	W0804 09:49:32.032460 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:32.032520 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:32.051955 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:32.052046 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:32.072292 1914687 logs.go:282] 0 containers: []
	W0804 09:49:32.072320 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:32.072382 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:32.091969 1914687 logs.go:282] 0 containers: []
	W0804 09:49:32.091992 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:32.092003 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:32.092014 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:32.118907 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:32.118955 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:32.141729 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:32.141764 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:32.235776 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:32.235813 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:32.261032 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:32.261071 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:32.319275 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:49:32.319303 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:32.319317 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:32.345276 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:32.345304 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:32.368286 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:32.368321 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:32.391665 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:32.391692 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:32.412242 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:32.412267 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:32.448259 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:32.448288 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:32.499135 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:32.499177 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:35.050397 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:35.050874 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:35.050974 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 09:49:35.072357 1914687 logs.go:282] 2 containers: [a25521cd2e4b 3bf4e03f1d1e]
	I0804 09:49:35.072433 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 09:49:35.099026 1914687 logs.go:282] 1 containers: [61b35865b3b0]
	I0804 09:49:35.099110 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 09:49:35.121652 1914687 logs.go:282] 0 containers: []
	W0804 09:49:35.121686 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:49:35.121748 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 09:49:35.142999 1914687 logs.go:282] 2 containers: [6feb7cf6bbc2 cc5bed820423]
	I0804 09:49:35.143091 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 09:49:35.164385 1914687 logs.go:282] 0 containers: []
	W0804 09:49:35.164416 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:49:35.164481 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 09:49:35.187186 1914687 logs.go:282] 2 containers: [bea2b4d6ce5d 4a93264af8b9]
	I0804 09:49:35.187285 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 09:49:35.209482 1914687 logs.go:282] 0 containers: []
	W0804 09:49:35.209510 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:49:35.209570 1914687 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0804 09:49:35.231721 1914687 logs.go:282] 0 containers: []
	W0804 09:49:35.231754 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:49:35.231768 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:49:35.231782 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:49:35.348782 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:49:35.348815 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:49:35.376129 1914687 logs.go:123] Gathering logs for kube-apiserver [3bf4e03f1d1e] ...
	I0804 09:49:35.376157 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf4e03f1d1e"
	I0804 09:49:35.428745 1914687 logs.go:123] Gathering logs for kube-scheduler [6feb7cf6bbc2] ...
	I0804 09:49:35.428779 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6feb7cf6bbc2"
	I0804 09:49:35.484124 1914687 logs.go:123] Gathering logs for kube-scheduler [cc5bed820423] ...
	I0804 09:49:35.484162 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc5bed820423"
	I0804 09:49:35.513301 1914687 logs.go:123] Gathering logs for kube-controller-manager [bea2b4d6ce5d] ...
	I0804 09:49:35.513348 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bea2b4d6ce5d"
	I0804 09:49:35.540478 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:49:35.540519 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:49:35.566959 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:49:35.566989 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:49:35.632656 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:49:35.632676 1914687 logs.go:123] Gathering logs for kube-apiserver [a25521cd2e4b] ...
	I0804 09:49:35.632689 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25521cd2e4b"
	I0804 09:49:35.662729 1914687 logs.go:123] Gathering logs for etcd [61b35865b3b0] ...
	I0804 09:49:35.662760 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b35865b3b0"
	I0804 09:49:35.691551 1914687 logs.go:123] Gathering logs for kube-controller-manager [4a93264af8b9] ...
	I0804 09:49:35.691580 1914687 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a93264af8b9"
	I0804 09:49:35.718276 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:49:35.718306 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 09:49:38.257338 1914687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0804 09:49:38.257776 1914687 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:49:38.257857 1914687 kubeadm.go:593] duration metric: took 4m6.571422666s to restartPrimaryControlPlane
	W0804 09:49:38.257937 1914687 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 09:49:38.257966 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:49:39.062294 1914687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:49:39.073777 1914687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:49:39.081983 1914687 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:49:39.082038 1914687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:49:39.090023 1914687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:49:39.090039 1914687 kubeadm.go:157] found existing configuration files:
	
	I0804 09:49:39.090081 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:49:39.098159 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:49:39.098208 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:49:39.106550 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:49:39.114754 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:49:39.114811 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:49:39.122498 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:49:39.130333 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:49:39.130379 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:49:39.137907 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:49:39.145777 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:49:39.145830 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
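	The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; here every grep exits with status 2 because the files are gone after kubeadm reset, so the rm calls are no-ops. A minimal standalone sketch of the same pattern (hypothetical shell loop, not minikube's actual Go implementation in kubeadm.go):

	    # Drop kubeconfig files that do not point at the expected control-plane endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done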
	I0804 09:49:39.153511 1914687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:49:39.188517 1914687 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:49:39.188611 1914687 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:49:39.202965 1914687 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:49:39.203047 1914687 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:49:39.203080 1914687 kubeadm.go:310] OS: Linux
	I0804 09:49:39.203166 1914687 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:49:39.203238 1914687 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:49:39.203308 1914687 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:49:39.203348 1914687 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:49:39.203455 1914687 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:49:39.203543 1914687 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:49:39.203593 1914687 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:49:39.203670 1914687 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:49:39.203744 1914687 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:49:39.258924 1914687 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:49:39.259077 1914687 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:49:39.259227 1914687 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:49:41.809078 1914687 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:49:41.810634 1914687 out.go:235]   - Generating certificates and keys ...
	I0804 09:49:41.810760 1914687 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:49:41.810867 1914687 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:49:41.810988 1914687 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:49:41.811088 1914687 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:49:41.811147 1914687 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:49:41.811203 1914687 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:49:41.811270 1914687 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:49:41.811331 1914687 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:49:41.811394 1914687 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:49:41.811451 1914687 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:49:41.811483 1914687 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:49:41.811527 1914687 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:49:42.155112 1914687 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:49:42.443305 1914687 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:49:42.703480 1914687 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:49:43.251735 1914687 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:49:43.668421 1914687 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:49:43.668985 1914687 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:49:43.671027 1914687 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:49:43.672717 1914687 out.go:235]   - Booting up control plane ...
	I0804 09:49:43.672829 1914687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:49:43.672943 1914687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:49:43.673030 1914687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:49:43.684917 1914687 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:49:43.685060 1914687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:49:43.691078 1914687 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:49:43.692003 1914687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:49:43.692053 1914687 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:49:43.780528 1914687 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:49:43.780693 1914687 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:49:44.282033 1914687 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.670892ms
	I0804 09:49:44.284576 1914687 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:49:44.284681 1914687 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0804 09:49:44.284812 1914687 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:49:44.284955 1914687 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:49:46.289637 1914687 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.004924176s
	I0804 09:50:07.890319 1914687 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 23.60554279s
	I0804 09:53:44.285286 1914687 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000442994s
	I0804 09:53:44.285357 1914687 kubeadm.go:310] 
	I0804 09:53:44.285501 1914687 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:53:44.285633 1914687 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:53:44.285796 1914687 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:53:44.285953 1914687 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:53:44.286057 1914687 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:53:44.286188 1914687 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:53:44.286197 1914687 kubeadm.go:310] 
	I0804 09:53:44.290160 1914687 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:53:44.290513 1914687 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:53:44.290676 1914687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:53:44.291082 1914687 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused
	I0804 09:53:44.291215 1914687 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0804 09:53:44.291416 1914687 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.670892ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.004924176s
	[control-plane-check] kube-scheduler is healthy after 23.60554279s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000442994s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
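	kubeadm's guidance above already names the follow-up commands; combined with the cri-dockerd socket used in this run, a minimal triage sketch would be (CONTAINERID is a placeholder from kubeadm's own message; the /etc/hosts check is an assumption, added because the failing livez URL uses the control-plane.minikube.internal name rather than the node IP):

	    # List all Kubernetes containers, as suggested in the kubeadm error text
	    crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
	    # Inspect the failing container's logs (substitute an ID from the listing)
	    crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID
	    # Check that the control-plane name resolves on the node
	    grep control-plane.minikube.internal /etc/hosts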
	
	I0804 09:53:44.291477 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:53:47.034285 1914687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.742777771s)
	I0804 09:53:47.034357 1914687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:53:47.045873 1914687 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:53:47.045951 1914687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:53:47.055532 1914687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:53:47.055561 1914687 kubeadm.go:157] found existing configuration files:
	
	I0804 09:53:47.055610 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:53:47.065009 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:53:47.065067 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:53:47.074306 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:53:47.082842 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:53:47.082894 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:53:47.090884 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:53:47.100335 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:53:47.100384 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:53:47.111850 1914687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:53:47.123145 1914687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:53:47.123205 1914687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:53:47.136216 1914687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:53:47.193634 1914687 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:53:47.193828 1914687 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:53:47.210961 1914687 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:53:47.211066 1914687 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:53:47.211123 1914687 kubeadm.go:310] OS: Linux
	I0804 09:53:47.211193 1914687 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:53:47.211265 1914687 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:53:47.211345 1914687 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:53:47.211409 1914687 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:53:47.211467 1914687 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:53:47.211532 1914687 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:53:47.211594 1914687 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:53:47.211658 1914687 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:53:47.211724 1914687 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:53:47.290068 1914687 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:53:47.290173 1914687 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:53:47.290252 1914687 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:53:47.304066 1914687 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:53:47.305617 1914687 out.go:235]   - Generating certificates and keys ...
	I0804 09:53:47.305761 1914687 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:53:47.305855 1914687 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:53:47.305957 1914687 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:53:47.306041 1914687 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:53:47.306142 1914687 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:53:47.306212 1914687 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:53:47.306298 1914687 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:53:47.306385 1914687 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:53:47.306490 1914687 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:53:47.306593 1914687 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:53:47.306651 1914687 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:53:47.306731 1914687 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:53:47.992755 1914687 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:53:48.722619 1914687 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:53:48.973906 1914687 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:53:49.351771 1914687 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:53:49.639561 1914687 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:53:49.640122 1914687 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:53:49.642192 1914687 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:53:49.718435 1914687 out.go:235]   - Booting up control plane ...
	I0804 09:53:49.718595 1914687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:53:49.718771 1914687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:53:49.718876 1914687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:53:49.719032 1914687 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:53:49.719187 1914687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:53:49.719332 1914687 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:53:49.719462 1914687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:53:49.719522 1914687 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:53:49.753286 1914687 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:53:49.753455 1914687 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:53:50.254884 1914687 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.745199ms
	I0804 09:53:50.257748 1914687 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:53:50.258099 1914687 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0804 09:53:50.258232 1914687 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:53:50.258363 1914687 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:53:52.762324 1914687 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.504522425s
	I0804 09:54:23.124771 1914687 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 32.866964303s
	I0804 09:57:50.258904 1914687 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000987094s
	I0804 09:57:50.258951 1914687 kubeadm.go:310] 
	I0804 09:57:50.259098 1914687 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:57:50.259231 1914687 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:57:50.259350 1914687 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:57:50.259468 1914687 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:57:50.259566 1914687 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:57:50.259701 1914687 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:57:50.259720 1914687 kubeadm.go:310] 
	I0804 09:57:50.262393 1914687 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:57:50.262641 1914687 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:57:50.262798 1914687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:57:50.263102 1914687 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I0804 09:57:50.263213 1914687 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 09:57:50.263311 1914687 kubeadm.go:394] duration metric: took 12m18.61154147s to StartCluster
	I0804 09:57:50.263367 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 09:57:50.263425 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 09:57:50.302849 1914687 cri.go:89] found id: "df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1"
	I0804 09:57:50.302878 1914687 cri.go:89] found id: ""
	I0804 09:57:50.302888 1914687 logs.go:282] 1 containers: [df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1]
	I0804 09:57:50.302945 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.307064 1914687 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 09:57:50.307136 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 09:57:50.340445 1914687 cri.go:89] found id: "db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6"
	I0804 09:57:50.340467 1914687 cri.go:89] found id: ""
	I0804 09:57:50.340475 1914687 logs.go:282] 1 containers: [db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6]
	I0804 09:57:50.340515 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.343804 1914687 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 09:57:50.343855 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 09:57:50.377703 1914687 cri.go:89] found id: ""
	I0804 09:57:50.377732 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.377743 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:57:50.377752 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 09:57:50.377813 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 09:57:50.413120 1914687 cri.go:89] found id: "85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863"
	I0804 09:57:50.413146 1914687 cri.go:89] found id: ""
	I0804 09:57:50.413155 1914687 logs.go:282] 1 containers: [85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863]
	I0804 09:57:50.413208 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.416921 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 09:57:50.416981 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 09:57:50.457153 1914687 cri.go:89] found id: ""
	I0804 09:57:50.457177 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.457185 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:57:50.457190 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 09:57:50.457273 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 09:57:50.497723 1914687 cri.go:89] found id: "1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae"
	I0804 09:57:50.497747 1914687 cri.go:89] found id: ""
	I0804 09:57:50.497758 1914687 logs.go:282] 1 containers: [1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae]
	I0804 09:57:50.497802 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.501780 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 09:57:50.501850 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 09:57:50.539775 1914687 cri.go:89] found id: ""
	I0804 09:57:50.539798 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.539806 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:57:50.539811 1914687 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 09:57:50.539851 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 09:57:50.575765 1914687 cri.go:89] found id: ""
	I0804 09:57:50.575792 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.575802 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:57:50.575824 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:57:50.575838 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:57:50.631767 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:57:50.631802 1914687 logs.go:123] Gathering logs for kube-apiserver [df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1] ...
	I0804 09:57:50.631816 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1"
	I0804 09:57:50.673833 1914687 logs.go:123] Gathering logs for etcd [db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6] ...
	I0804 09:57:50.673862 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6"
	I0804 09:57:50.713861 1914687 logs.go:123] Gathering logs for kube-scheduler [85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863] ...
	I0804 09:57:50.713888 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863"
	I0804 09:57:50.782670 1914687 logs.go:123] Gathering logs for kube-controller-manager [1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae] ...
	I0804 09:57:50.782708 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae"
	I0804 09:57:50.821748 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:57:50.821774 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:57:50.911276 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:57:50.911313 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:57:50.938627 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:57:50.938659 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:57:50.973015 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:57:50.973046 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 09:57:51.013375 1914687 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.745199ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.504522425s
	[control-plane-check] kube-scheduler is healthy after 32.866964303s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000987094s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W0804 09:57:51.013440 1914687 out.go:270] * 
	W0804 09:57:51.013521 1914687 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.745199ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.504522425s
	[control-plane-check] kube-scheduler is healthy after 32.866964303s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000987094s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 09:57:51.013543 1914687 out.go:270] * 
	W0804 09:57:51.015357 1914687 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:57:51.019682 1914687 out.go:201] 
	W0804 09:57:51.020752 1914687 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.745199ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.504522425s
	[control-plane-check] kube-scheduler is healthy after 32.866964303s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000987094s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 09:57:51.020788 1914687 out.go:270] * 
	W0804 09:57:51.022725 1914687 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:57:51.023892 1914687 out.go:201] 

** /stderr **
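The kubeadm advice captured above amounts to a two-step triage: list the Kubernetes containers, then read the failing one's logs. As a sketch only (the runtime endpoint is the cri-dockerd socket used throughout this run, and CONTAINERID stays a placeholder for whatever the first command turns up), from a shell on the node:

    # open a shell on the node for this profile
    out/minikube-linux-amd64 -p kubernetes-upgrade-402519 ssh
    # list all Kubernetes containers, including exited ones
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
    # read the logs of the failing container found above
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID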
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-402519 --memory=3072 --kubernetes-version=v1.34.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-402519 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-402519 version --output=json: exit status 1 (52.62429ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
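The connection-refused error is consistent with the kube-apiserver never passing its /livez check above. Two follow-ups worth capturing, sketched here on the assumption that curl is available inside the kicbase node image:

    # probe the same endpoint kubeadm polled, from inside the node
    out/minikube-linux-amd64 -p kubernetes-upgrade-402519 ssh -- curl -sk https://192.168.85.2:8443/livez
    # collect the full log bundle the minikube error message asks for
    out/minikube-linux-amd64 -p kubernetes-upgrade-402519 logs --file=logs.txt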
panic.go:631: *** TestKubernetesUpgrade FAILED at 2025-08-04 09:57:51.307093242 +0000 UTC m=+5013.697916999
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-402519
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-402519:

-- stdout --
	[
	    {
	        "Id": "dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258",
	        "Created": "2025-08-04T09:44:37.07552047Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1914875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:45:11.154983714Z",
	            "FinishedAt": "2025-08-04T09:45:10.483445506Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258/hostname",
	        "HostsPath": "/var/lib/docker/containers/dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258/hosts",
	        "LogPath": "/var/lib/docker/containers/dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258/dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258-json.log",
	        "Name": "/kubernetes-upgrade-402519",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-402519:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-402519",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dda5bc38075834bb5497aa97db60f97eeca94d4ea5b119fb8850d191d4eb4258",
	                "LowerDir": "/var/lib/docker/overlay2/512e577e3674bebca1b9fcdb0a4e784d585eaae2ecb292734a176369e8025f6f-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/512e577e3674bebca1b9fcdb0a4e784d585eaae2ecb292734a176369e8025f6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/512e577e3674bebca1b9fcdb0a4e784d585eaae2ecb292734a176369e8025f6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/512e577e3674bebca1b9fcdb0a4e784d585eaae2ecb292734a176369e8025f6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-402519",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-402519/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-402519",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-402519",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-402519",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c2e53afbf29a666b6fd9e32ce658cd67ec174a88ae7fe502c8a61e83bd25ac9",
	            "SandboxKey": "/var/run/docker/netns/7c2e53afbf29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32998"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32999"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33002"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33000"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33001"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-402519": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:d0:3a:1e:53:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a718837c1126e812a9488420f3b880ce60184fa2f06b808d0b5a8bdd3e64ab7",
	                    "EndpointID": "bfb194ac11e4ed23b85969a4674309be087c6f0b9b9784a2bb45397c158bb135",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-402519",
	                        "dda5bc380758"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
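The inspect dump above is usually consumed one field at a time with a Go template rather than parsed wholesale. Below is a minimal Go sketch, reusing the exact template string the provisioner runs later in this log to resolve the host port bound to the container's 22/tcp; the harness around it is hypothetical and not part of the test code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort shells out to docker with the same Go template the
// provisioner uses further down in this log to find the host port
// published for the container's SSH port (22/tcp).
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("kubernetes-upgrade-402519")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // "32998", per the Ports block above
}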
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-402519 -n kubernetes-upgrade-402519
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-402519 -n kubernetes-upgrade-402519: exit status 2 (283.663465ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-402519 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-304259                                                                                                                                                                                                                              │ old-k8s-version-304259       │ jenkins │ v1.36.0 │ 04 Aug 25 09:54 UTC │ 04 Aug 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-304259                                                                                                                                                                                                                              │ old-k8s-version-304259       │ jenkins │ v1.36.0 │ 04 Aug 25 09:54 UTC │ 04 Aug 25 09:54 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931            │ jenkins │ v1.36.0 │ 04 Aug 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-670157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:54 UTC │ 04 Aug 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-670157 --alsologtostderr -v=3                                                                                                                                                                                                 │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:55 UTC │ 04 Aug 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-670157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:55 UTC │ 04 Aug 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-670157 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3                                                                             │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:55 UTC │ 04 Aug 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-670157 image list --format=json                                                                                                                                                                                                  │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:56 UTC │ 04 Aug 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-670157 --alsologtostderr -v=1                                                                                                                                                                                                 │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:56 UTC │ 04 Aug 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-670157 --alsologtostderr -v=1                                                                                                                                                                                                 │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:56 UTC │ 04 Aug 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-670157                                                                                                                                                                                                                        │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:56 UTC │ 04 Aug 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-670157                                                                                                                                                                                                                        │ default-k8s-diff-port-670157 │ jenkins │ v1.36.0 │ 04 Aug 25 09:56 UTC │ 04 Aug 25 09:56 UTC │
	│ start   │ -p auto-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker                                                                                                                              │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:56 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 pgrep -a kubelet                                                                                                                                                                                                                        │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                             │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo cat /etc/hosts                                                                                                                                                                                                                     │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo cat /etc/resolv.conf                                                                                                                                                                                                               │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo crictl pods                                                                                                                                                                                                                        │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo crictl ps --all                                                                                                                                                                                                                    │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                             │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo ip a s                                                                                                                                                                                                                             │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo ip r s                                                                                                                                                                                                                             │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo iptables-save                                                                                                                                                                                                                      │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo iptables -t nat -L -n -v                                                                                                                                                                                                           │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │ 04 Aug 25 09:57 UTC │
	│ ssh     │ -p auto-561540 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                                   │ auto-561540                  │ jenkins │ v1.36.0 │ 04 Aug 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 09:56:22
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 09:56:22.798730 2057789 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:56:22.799009 2057789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:56:22.799019 2057789 out.go:358] Setting ErrFile to fd 2...
	I0804 09:56:22.799024 2057789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:56:22.799232 2057789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:56:22.799813 2057789 out.go:352] Setting JSON to false
	I0804 09:56:22.801005 2057789 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153472,"bootTime":1754147911,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:56:22.801094 2057789 start.go:140] virtualization: kvm guest
	I0804 09:56:22.802977 2057789 out.go:177] * [auto-561540] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:56:22.804108 2057789 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:56:22.804123 2057789 notify.go:220] Checking for updates...
	I0804 09:56:22.805970 2057789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:56:22.807813 2057789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:56:22.808885 2057789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:56:22.809908 2057789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:56:22.811031 2057789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:56:22.812362 2057789 config.go:182] Loaded profile config "kubernetes-upgrade-402519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:56:22.812463 2057789 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:56:22.812557 2057789 config.go:182] Loaded profile config "no-preload-499486": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:56:22.812641 2057789 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:56:22.834689 2057789 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:56:22.834784 2057789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:56:22.884217 2057789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 09:56:22.875509599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:56:22.884354 2057789 docker.go:318] overlay module found
	I0804 09:56:22.885825 2057789 out.go:177] * Using the docker driver based on user configuration
	I0804 09:56:22.886772 2057789 start.go:304] selected driver: docker
	I0804 09:56:22.886787 2057789 start.go:918] validating driver "docker" against <nil>
	I0804 09:56:22.886798 2057789 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:56:22.887619 2057789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:56:22.937519 2057789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 09:56:22.928769115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:56:22.937748 2057789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0804 09:56:22.938030 2057789 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 09:56:22.939470 2057789 out.go:177] * Using Docker driver with root privileges
	I0804 09:56:22.940380 2057789 cni.go:84] Creating CNI manager for ""
	I0804 09:56:22.940477 2057789 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:56:22.940492 2057789 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 09:56:22.940571 2057789 start.go:348] cluster config:
	{Name:auto-561540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:auto-561540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:56:22.941621 2057789 out.go:177] * Starting "auto-561540" primary control-plane node in "auto-561540" cluster
	I0804 09:56:22.942436 2057789 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:56:22.943455 2057789 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:56:22.944324 2057789 preload.go:131] Checking if preload exists for k8s version v1.33.3 and runtime docker
	I0804 09:56:22.944362 2057789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4
	I0804 09:56:22.944371 2057789 cache.go:56] Caching tarball of preloaded images
	I0804 09:56:22.944420 2057789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:56:22.944453 2057789 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 09:56:22.944464 2057789 cache.go:59] Finished verifying existence of preloaded tar for v1.33.3 on docker
	I0804 09:56:22.944542 2057789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/config.json ...
	I0804 09:56:22.944559 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/config.json: {Name:mk84dba3bf906a38e1750533ec00a340fd03655d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:22.964170 2057789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:56:22.964189 2057789 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:56:22.964205 2057789 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:56:22.964233 2057789 start.go:360] acquireMachinesLock for auto-561540: {Name:mke70cb4c7d42381fe976f8c87260fd965d5a249 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:56:22.964356 2057789 start.go:364] duration metric: took 100.575µs to acquireMachinesLock for "auto-561540"
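The acquireMachinesLock line above reports a file-backed lock configured with a 500ms retry delay and a 10m timeout. A rough sketch of that retry loop follows, assuming O_EXCL file creation as the exclusion primitive; the real lock lives in minikube's lock package, so this is an illustration only, with a hypothetical lock path.

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries an exclusive-create until it wins or the deadline
// passes, mirroring the Delay/Timeout parameters printed in the log line.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release by deleting the lock file
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/mk-auto-561540.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}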
	I0804 09:56:22.964393 2057789 start.go:93] Provisioning new machine with config: &{Name:auto-561540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:auto-561540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 09:56:22.964478 2057789 start.go:125] createHost starting for "" (driver="docker")
	I0804 09:56:22.966166 2057789 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0804 09:56:22.966396 2057789 start.go:159] libmachine.API.Create for "auto-561540" (driver="docker")
	I0804 09:56:22.966428 2057789 client.go:168] LocalClient.Create starting
	I0804 09:56:22.966523 2057789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem
	I0804 09:56:22.966561 2057789 main.go:141] libmachine: Decoding PEM data...
	I0804 09:56:22.966582 2057789 main.go:141] libmachine: Parsing certificate...
	I0804 09:56:22.966650 2057789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem
	I0804 09:56:22.966685 2057789 main.go:141] libmachine: Decoding PEM data...
	I0804 09:56:22.966701 2057789 main.go:141] libmachine: Parsing certificate...
	I0804 09:56:22.967024 2057789 cli_runner.go:164] Run: docker network inspect auto-561540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0804 09:56:22.982807 2057789 cli_runner.go:211] docker network inspect auto-561540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0804 09:56:22.982881 2057789 network_create.go:284] running [docker network inspect auto-561540] to gather additional debugging logs...
	I0804 09:56:22.982903 2057789 cli_runner.go:164] Run: docker network inspect auto-561540
	W0804 09:56:22.998457 2057789 cli_runner.go:211] docker network inspect auto-561540 returned with exit code 1
	I0804 09:56:22.998490 2057789 network_create.go:287] error running [docker network inspect auto-561540]: docker network inspect auto-561540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-561540 not found
	I0804 09:56:22.998501 2057789 network_create.go:289] output of [docker network inspect auto-561540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-561540 not found
	
	** /stderr **
	I0804 09:56:22.998572 2057789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:56:23.015671 2057789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4122743d943 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:66:3d:c4:8d:93} reservation:<nil>}
	I0804 09:56:23.016420 2057789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8451716aa30c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:1d:5b:3c:f6:bd} reservation:<nil>}
	I0804 09:56:23.017102 2057789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9d42b63aa0b7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:9d:f7:36:38:48} reservation:<nil>}
	I0804 09:56:23.017736 2057789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b469f2b8beae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:f6:36:08:bc:59} reservation:<nil>}
	I0804 09:56:23.018467 2057789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7a718837c112 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:36:4c:e3:06:6c:d3} reservation:<nil>}
	I0804 09:56:23.018993 2057789 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b62d1a983196 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:46:73:5b:f0:69:6b} reservation:<nil>}
	I0804 09:56:23.019843 2057789 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e33720}
	I0804 09:56:23.019870 2057789 network_create.go:124] attempt to create docker network auto-561540 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0804 09:56:23.019932 2057789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-561540 auto-561540
	I0804 09:56:23.071833 2057789 network_create.go:108] docker network auto-561540 192.168.103.0/24 created
	I0804 09:56:23.071872 2057789 kic.go:121] calculated static IP "192.168.103.2" for the "auto-561540" container
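The subnet probe above walks private /24 candidates in order and settles on the first one no existing bridge network claims, then assigns .2 (the ClientMin) as the container's static IP. A small Go sketch of that walk, assuming the step-of-9 sequence (49, 58, 67, 76, 85, 94, 103) implied by the skipped subnets; this is illustrative, not minikube's actual network.go.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first 192.168.x.0/24 candidate that is not
// already claimed by an existing docker bridge network.
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue // "skipping subnet ... that is taken", as in the log
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	// The six subnets the log reports as taken.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	subnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.103.0/24
}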
	I0804 09:56:23.071929 2057789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0804 09:56:23.089014 2057789 cli_runner.go:164] Run: docker volume create auto-561540 --label name.minikube.sigs.k8s.io=auto-561540 --label created_by.minikube.sigs.k8s.io=true
	I0804 09:56:23.106689 2057789 oci.go:103] Successfully created a docker volume auto-561540
	I0804 09:56:23.106765 2057789 cli_runner.go:164] Run: docker run --rm --name auto-561540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-561540 --entrypoint /usr/bin/test -v auto-561540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -d /var/lib
	I0804 09:56:23.518994 2057789 oci.go:107] Successfully prepared a docker volume auto-561540
	I0804 09:56:23.519031 2057789 preload.go:131] Checking if preload exists for k8s version v1.33.3 and runtime docker
	I0804 09:56:23.519057 2057789 kic.go:194] Starting extracting preloaded images to volume ...
	I0804 09:56:23.519112 2057789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-561540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -I lz4 -xf /preloaded.tar -C /extractDir
	I0804 09:56:27.674727 2057789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-561540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -I lz4 -xf /preloaded.tar -C /extractDir: (4.155564516s)
	I0804 09:56:27.674758 2057789 kic.go:203] duration metric: took 4.155698593s to extract preloaded images to volume ...
	W0804 09:56:27.674927 2057789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0804 09:56:27.675026 2057789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0804 09:56:27.725344 2057789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-561540 --name auto-561540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-561540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-561540 --network auto-561540 --ip 192.168.103.2 --volume auto-561540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d
	I0804 09:56:27.968164 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Running}}
	I0804 09:56:27.987363 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Status}}
	I0804 09:56:28.006181 2057789 cli_runner.go:164] Run: docker exec auto-561540 stat /var/lib/dpkg/alternatives/iptables
	I0804 09:56:28.045845 2057789 oci.go:144] the created container "auto-561540" has a running status.
	I0804 09:56:28.045882 2057789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa...
	I0804 09:56:28.513491 2057789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0804 09:56:28.542515 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Status}}
	I0804 09:56:28.559939 2057789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0804 09:56:28.559958 2057789 kic_runner.go:114] Args: [docker exec --privileged auto-561540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0804 09:56:28.598747 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Status}}
	I0804 09:56:28.615271 2057789 machine.go:93] provisionDockerMachine start ...
	I0804 09:56:28.615352 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:28.632366 2057789 main.go:141] libmachine: Using SSH client type: native
	I0804 09:56:28.632670 2057789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0804 09:56:28.632691 2057789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:56:28.756561 2057789 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-561540
	
	I0804 09:56:28.756589 2057789 ubuntu.go:169] provisioning hostname "auto-561540"
	I0804 09:56:28.756640 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:28.773775 2057789 main.go:141] libmachine: Using SSH client type: native
	I0804 09:56:28.774048 2057789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0804 09:56:28.774063 2057789 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-561540 && echo "auto-561540" | sudo tee /etc/hostname
	I0804 09:56:28.907924 2057789 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-561540
	
	I0804 09:56:28.908008 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:28.925512 2057789 main.go:141] libmachine: Using SSH client type: native
	I0804 09:56:28.925710 2057789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0804 09:56:28.925728 2057789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-561540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-561540/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-561540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:56:29.049272 2057789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:56:29.049313 2057789 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:56:29.049345 2057789 ubuntu.go:177] setting up certificates
	I0804 09:56:29.049367 2057789 provision.go:84] configureAuth start
	I0804 09:56:29.049460 2057789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-561540
	I0804 09:56:29.067077 2057789 provision.go:143] copyHostCerts
	I0804 09:56:29.067136 2057789 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:56:29.067146 2057789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:56:29.067207 2057789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:56:29.067293 2057789 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:56:29.067302 2057789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:56:29.067325 2057789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:56:29.067419 2057789 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:56:29.067427 2057789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:56:29.067451 2057789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:56:29.067509 2057789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.auto-561540 san=[127.0.0.1 192.168.103.2 auto-561540 localhost minikube]
	I0804 09:56:29.338939 2057789 provision.go:177] copyRemoteCerts
	I0804 09:56:29.339004 2057789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:56:29.339042 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:29.356723 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:29.445517 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:56:29.467341 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0804 09:56:29.489618 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:56:29.510688 2057789 provision.go:87] duration metric: took 461.303013ms to configureAuth
	I0804 09:56:29.510718 2057789 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:56:29.510875 2057789 config.go:182] Loaded profile config "auto-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:56:29.510920 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:29.527359 2057789 main.go:141] libmachine: Using SSH client type: native
	I0804 09:56:29.527606 2057789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0804 09:56:29.527620 2057789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:56:29.649534 2057789 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:56:29.649563 2057789 ubuntu.go:71] root file system type: overlay
	I0804 09:56:29.649718 2057789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:56:29.649792 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:29.667594 2057789 main.go:141] libmachine: Using SSH client type: native
	I0804 09:56:29.667851 2057789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0804 09:56:29.667942 2057789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:56:29.799933 2057789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:56:29.800005 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:29.817763 2057789 main.go:141] libmachine: Using SSH client type: native
	I0804 09:56:29.817993 2057789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0804 09:56:29.818012 2057789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 09:56:30.553517 2057789 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-07-25 11:32:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-08-04 09:56:29.793255369 +0000
	@@ -1,38 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	 StartLimitBurst=3
	 StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	+Restart=on-failure
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	 ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
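The one-liner above is a compare-and-swap update: the candidate unit is written to docker.service.new, diffed against the installed unit, and only when they differ is it moved into place, followed by a forced daemon-reload, enable, and restart. A minimal Go sketch of the same pattern (hypothetical helper, assuming local sudo access; minikube runs the equivalent shell over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// swapServiceUnit mirrors the logged shell one-liner: diff the candidate
// unit against the installed one and, only if they differ (or the installed
// unit is missing), move the candidate into place and force a
// daemon-reload, enable, and restart.
func swapServiceUnit(current, candidate, name string) error {
	// `diff -u` exits non-zero when the files differ or one is missing.
	if err := exec.Command("sudo", "diff", "-u", current, candidate).Run(); err == nil {
		return nil // identical: nothing to do
	}
	steps := [][]string{
		{"sudo", "mv", candidate, current},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", name},
		{"sudo", "systemctl", "-f", "restart", name},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := swapServiceUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Println(err)
	}
}

Gating the swap on the diff keeps the restart out of the happy path, so an unchanged unit never bounces the docker daemon.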
	I0804 09:56:30.553552 2057789 machine.go:96] duration metric: took 1.938259424s to provisionDockerMachine
	I0804 09:56:30.553566 2057789 client.go:171] duration metric: took 7.587127821s to LocalClient.Create
	I0804 09:56:30.553593 2057789 start.go:167] duration metric: took 7.587197378s to libmachine.API.Create "auto-561540"
	I0804 09:56:30.553606 2057789 start.go:293] postStartSetup for "auto-561540" (driver="docker")
	I0804 09:56:30.553623 2057789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:56:30.553690 2057789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:56:30.553739 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:30.571029 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:30.662079 2057789 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:56:30.665091 2057789 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:56:30.665120 2057789 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:56:30.665137 2057789 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:56:30.665144 2057789 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:56:30.665156 2057789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:56:30.665205 2057789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:56:30.665337 2057789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:56:30.665436 2057789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 09:56:30.673411 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:56:30.696018 2057789 start.go:296] duration metric: took 142.394762ms for postStartSetup
	I0804 09:56:30.696327 2057789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-561540
	I0804 09:56:30.714703 2057789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/config.json ...
	I0804 09:56:30.714952 2057789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:56:30.714998 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:30.731650 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:30.818023 2057789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:56:30.822078 2057789 start.go:128] duration metric: took 7.857583585s to createHost
	I0804 09:56:30.822102 2057789 start.go:83] releasing machines lock for "auto-561540", held for 7.857732793s
	I0804 09:56:30.822175 2057789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-561540
	I0804 09:56:30.839138 2057789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:56:30.839161 2057789 ssh_runner.go:195] Run: cat /version.json
	I0804 09:56:30.839199 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:30.839212 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:30.856582 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:30.856644 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:31.013280 2057789 ssh_runner.go:195] Run: systemctl --version
	I0804 09:56:31.017527 2057789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:56:31.021584 2057789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:56:31.044180 2057789 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:56:31.044243 2057789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 09:56:31.068211 2057789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
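The find/sed pipeline above normalizes any loopback CNI config: it inserts a "name" field when missing and pins cniVersion to 1.0.0. A rough Go equivalent of that patch (the file path is illustrative; the real step edits whatever *loopback.conf* matches under /etc/cni/net.d):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// patchLoopback mirrors the sed-based patch in the log: ensure the
// loopback CNI config carries a "name" field and pin cniVersion to 1.0.0.
func patchLoopback(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	if conf["type"] != "loopback" {
		return nil // not the config we are after
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Hypothetical path for illustration only.
	if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
		fmt.Println(err)
	}
}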
	I0804 09:56:31.068239 2057789 start.go:495] detecting cgroup driver to use...
	I0804 09:56:31.068272 2057789 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:56:31.068389 2057789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:56:31.083166 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:31.465359 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:56:31.475593 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:56:31.484922 2057789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:56:31.484990 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:56:31.494809 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:56:31.503560 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:56:31.511950 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:56:31.520667 2057789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:56:31.528733 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:56:31.537727 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:56:31.546615 2057789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 09:56:31.555943 2057789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:56:31.563656 2057789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:56:31.571521 2057789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:56:31.647188 2057789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 09:56:31.732705 2057789 start.go:495] detecting cgroup driver to use...
	I0804 09:56:31.732763 2057789 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:56:31.732817 2057789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:56:31.744624 2057789 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:56:31.744687 2057789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:56:31.756226 2057789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:56:31.772669 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:32.168714 2057789 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:56:32.172509 2057789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:56:32.181219 2057789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 09:56:32.197221 2057789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:56:32.272818 2057789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:56:32.354854 2057789 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:56:32.354981 2057789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 09:56:32.371403 2057789 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:56:32.381137 2057789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:56:32.454333 2057789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:56:32.742282 2057789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:56:32.753621 2057789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:56:32.764461 2057789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:56:32.847338 2057789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:56:32.922720 2057789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:56:33.003307 2057789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:56:33.015551 2057789 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:56:33.025109 2057789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:56:33.104734 2057789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:56:33.166100 2057789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:56:33.176646 2057789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:56:33.176709 2057789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:56:33.180046 2057789 start.go:563] Will wait 60s for crictl version
	I0804 09:56:33.180088 2057789 ssh_runner.go:195] Run: which crictl
	I0804 09:56:33.183122 2057789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:56:33.214659 2057789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
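Both 60s waits above are simple bounded polls: retry until the cri-dockerd socket appears (the log does it with stat) and until crictl answers a version call. A sketch of such a wait, written against the socket directly rather than via stat (assumed behavior, not the actual minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes, like the logged "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if c, err := net.DialTimeout("unix", path, time.Second); err == nil {
			c.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}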
	I0804 09:56:33.214713 2057789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:56:33.237660 2057789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:56:33.261472 2057789 out.go:235] * Preparing Kubernetes v1.33.3 on Docker 28.3.3 ...
	I0804 09:56:33.261553 2057789 cli_runner.go:164] Run: docker network inspect auto-561540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:56:33.278012 2057789 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0804 09:56:33.281486 2057789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
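The bash one-liner above upserts a host entry: strip any line already ending in the tab-separated name, append the fresh mapping, and copy the temp file back with sudo. A rough Go rendition of the same idea (a sketch; the real step stays a shell pipeline so it can run remotely):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost reproduces the logged /etc/hosts update: drop any existing
// line for the name (grep -v $'\t<name>$'), then append "IP<TAB>name".
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.103.1", "host.minikube.internal"))
}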
	I0804 09:56:33.291990 2057789 kubeadm.go:875] updating cluster {Name:auto-561540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:auto-561540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:56:33.292211 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:33.704558 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:34.081980 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:34.491185 2057789 preload.go:131] Checking if preload exists for k8s version v1.33.3 and runtime docker
	I0804 09:56:34.491342 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:34.872078 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:35.251461 2057789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
	I0804 09:56:35.631724 2057789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:56:35.651769 2057789 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.33.3
	registry.k8s.io/kube-proxy:v1.33.3
	registry.k8s.io/kube-controller-manager:v1.33.3
	registry.k8s.io/kube-scheduler:v1.33.3
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 09:56:35.651790 2057789 docker.go:633] Images already preloaded, skipping extraction
	I0804 09:56:35.651845 2057789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:56:35.670834 2057789 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.33.3
	registry.k8s.io/kube-proxy:v1.33.3
	registry.k8s.io/kube-controller-manager:v1.33.3
	registry.k8s.io/kube-scheduler:v1.33.3
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 09:56:35.670858 2057789 cache_images.go:85] Images are preloaded, skipping loading
	I0804 09:56:35.670871 2057789 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.33.3 docker true true} ...
	I0804 09:56:35.670974 2057789 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-561540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.3 ClusterName:auto-561540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 09:56:35.671037 2057789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:56:35.718375 2057789 cni.go:84] Creating CNI manager for ""
	I0804 09:56:35.718407 2057789 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:56:35.718422 2057789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 09:56:35.718456 2057789 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.33.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-561540 NodeName:auto-561540 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:56:35.718666 2057789 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-561540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
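The kubeadm YAML above is rendered from the options struct logged just before it. A toy text/template rendering of the nodeRegistration stanza in the same spirit (the field names and template text are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A toy rendering of the nodeRegistration stanza from the options shown
// above; illustrative only.
var tmpl = template.Must(template.New("kubeadm").Parse(`nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`))

func main() {
	opts := struct {
		CRISocket, NodeName, NodeIP string
	}{"/var/run/cri-dockerd.sock", "auto-561540", "192.168.103.2"}
	tmpl.Execute(os.Stdout, opts)
}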
	I0804 09:56:35.718751 2057789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.3
	I0804 09:56:35.727177 2057789 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 09:56:35.727242 2057789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:56:35.735859 2057789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0804 09:56:35.752232 2057789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 09:56:35.768818 2057789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0804 09:56:35.784948 2057789 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:56:35.787924 2057789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 09:56:35.797618 2057789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:56:35.876001 2057789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:56:35.888425 2057789 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540 for IP: 192.168.103.2
	I0804 09:56:35.888447 2057789 certs.go:194] generating shared ca certs ...
	I0804 09:56:35.888465 2057789 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:35.888629 2057789 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:56:35.888673 2057789 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:56:35.888686 2057789 certs.go:256] generating profile certs ...
	I0804 09:56:35.888739 2057789 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.key
	I0804 09:56:35.888753 2057789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt with IP's: []
	I0804 09:56:36.080277 2057789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt ...
	I0804 09:56:36.080310 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: {Name:mkcb51fdb9eda3498cdc0a43279a4a7c9f927f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:36.080486 2057789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.key ...
	I0804 09:56:36.080497 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.key: {Name:mkc780e49783a0dadcb664276d58ec3bac28e392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:36.080574 2057789 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.key.cfa5f251
	I0804 09:56:36.080589 2057789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.crt.cfa5f251 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0804 09:56:36.254805 2057789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.crt.cfa5f251 ...
	I0804 09:56:36.254834 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.crt.cfa5f251: {Name:mke8da14501dc03136d534eb8b32e0d859edcae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:36.255010 2057789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.key.cfa5f251 ...
	I0804 09:56:36.255023 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.key.cfa5f251: {Name:mk26b57dc30a1aca72e473dfb79ede7c2808e518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:36.255103 2057789 certs.go:381] copying /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.crt.cfa5f251 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.crt
	I0804 09:56:36.255178 2057789 certs.go:385] copying /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.key.cfa5f251 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.key
	I0804 09:56:36.255229 2057789 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.key
	I0804 09:56:36.255243 2057789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.crt with IP's: []
	I0804 09:56:36.522420 2057789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.crt ...
	I0804 09:56:36.522452 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.crt: {Name:mk714dcca635a817f86cac49dfcac2a776449c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:36.522607 2057789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.key ...
	I0804 09:56:36.522617 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.key: {Name:mkd3878b0a8706d8cac29f27d0b5ae429d83d85c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
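Each "generating signed profile cert" step issues a leaf certificate signed by the shared minikube CA, with the IP SANs listed above baked in. A compact crypto/x509 sketch of that flow (self-contained, so it creates a throwaway CA; minikube reuses its cached CA key instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; minikube would load the cached minikubeCA key pair here.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs from the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, leafKey)
	fmt.Printf("issued leaf cert, %d bytes of DER\n", len(leafDER))
}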
	I0804 09:56:36.522793 2057789 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:56:36.522828 2057789 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:56:36.522837 2057789 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:56:36.522871 2057789 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:56:36.522894 2057789 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:56:36.522917 2057789 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:56:36.522954 2057789 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:56:36.523610 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:56:36.546484 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:56:36.568055 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:56:36.589551 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:56:36.610321 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0804 09:56:36.632002 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 09:56:36.653086 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:56:36.675088 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 09:56:36.697107 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:56:36.718277 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:56:36.740722 2057789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:56:36.764245 2057789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 09:56:36.780630 2057789 ssh_runner.go:195] Run: openssl version
	I0804 09:56:36.785895 2057789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:56:36.794111 2057789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:56:36.797158 2057789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:56:36.797209 2057789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:56:36.803143 2057789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 09:56:36.811227 2057789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:56:36.819453 2057789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:56:36.822474 2057789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:56:36.822522 2057789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:56:36.828483 2057789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 09:56:36.836508 2057789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:56:36.844760 2057789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:56:36.847760 2057789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:56:36.847794 2057789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:56:36.853750 2057789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
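The openssl/ln sequences above install each CA under its OpenSSL subject hash: /etc/ssl/certs/<hash>.0 is the lookup name OpenSSL uses for trusted certificates. A small Go sketch of the same two steps (shelling out to openssl for the hash; paths taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert hashes a PEM with `openssl x509 -hash` and links
// /etc/ssl/certs/<hash>.0 at it, which is how OpenSSL locates trusted CAs.
// Sketch only; the real steps run over SSH with sudo.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}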
	I0804 09:56:36.862317 2057789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:56:36.865200 2057789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 09:56:36.865272 2057789 kubeadm.go:392] StartCluster: {Name:auto-561540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:auto-561540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:56:36.865390 2057789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:56:36.883341 2057789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:56:36.891706 2057789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:56:36.899824 2057789 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:56:36.899864 2057789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:56:36.907912 2057789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:56:36.907943 2057789 kubeadm.go:157] found existing configuration files:
	
	I0804 09:56:36.907980 2057789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:56:36.915782 2057789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:56:36.915822 2057789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:56:36.923395 2057789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:56:36.931430 2057789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:56:36.931482 2057789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:56:36.938829 2057789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:56:36.946394 2057789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:56:36.946444 2057789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:56:36.953663 2057789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:56:36.961087 2057789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:56:36.961152 2057789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:56:36.968390 2057789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:56:37.002668 2057789 kubeadm.go:310] [init] Using Kubernetes version: v1.33.3
	I0804 09:56:37.002767 2057789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:56:37.021774 2057789 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:56:37.021878 2057789 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:56:37.021933 2057789 kubeadm.go:310] OS: Linux
	I0804 09:56:37.022021 2057789 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:56:37.022115 2057789 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:56:37.022230 2057789 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:56:37.022352 2057789 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:56:37.022457 2057789 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:56:37.022554 2057789 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:56:37.022617 2057789 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:56:37.022709 2057789 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:56:37.022798 2057789 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:56:37.071987 2057789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:56:37.072079 2057789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:56:37.072156 2057789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:56:37.083116 2057789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:56:37.085901 2057789 out.go:235]   - Generating certificates and keys ...
	I0804 09:56:37.086008 2057789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:56:37.086095 2057789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:56:37.264171 2057789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 09:56:37.591544 2057789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 09:56:37.726930 2057789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 09:56:37.879543 2057789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 09:56:38.032955 2057789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 09:56:38.033170 2057789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-561540 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0804 09:56:38.323623 2057789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 09:56:38.323772 2057789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-561540 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0804 09:56:38.472876 2057789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 09:56:38.646434 2057789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 09:56:38.773623 2057789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 09:56:38.773742 2057789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:56:38.930993 2057789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:56:39.350466 2057789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:56:39.437052 2057789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:56:39.997208 2057789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:56:40.135537 2057789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:56:40.135980 2057789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:56:40.138136 2057789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:56:40.140151 2057789 out.go:235]   - Booting up control plane ...
	I0804 09:56:40.140285 2057789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:56:40.140413 2057789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:56:40.141666 2057789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:56:40.150813 2057789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:56:40.156057 2057789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:56:40.156143 2057789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:56:40.242147 2057789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:56:40.242273 2057789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:56:41.243375 2057789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001261239s
	I0804 09:56:41.246049 2057789 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:56:41.246147 2057789 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0804 09:56:41.246253 2057789 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:56:41.246389 2057789 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:56:43.300889 2057789 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.054756154s
	I0804 09:56:44.776557 2057789 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.530511616s
	I0804 09:56:46.247557 2057789 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001468559s
	I0804 09:56:46.258365 2057789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 09:56:46.267503 2057789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 09:56:46.281750 2057789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 09:56:46.282013 2057789 kubeadm.go:310] [mark-control-plane] Marking the node auto-561540 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 09:56:46.288023 2057789 kubeadm.go:310] [bootstrap-token] Using token: sj2mqy.qqs4jzzp3qsyfaid
	I0804 09:56:46.289169 2057789 out.go:235]   - Configuring RBAC rules ...
	I0804 09:56:46.289330 2057789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 09:56:46.293004 2057789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 09:56:46.297548 2057789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 09:56:46.300455 2057789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 09:56:46.302569 2057789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 09:56:46.304755 2057789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 09:56:46.653037 2057789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 09:56:47.087081 2057789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 09:56:47.654279 2057789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 09:56:47.655119 2057789 kubeadm.go:310] 
	I0804 09:56:47.655226 2057789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 09:56:47.655251 2057789 kubeadm.go:310] 
	I0804 09:56:47.655342 2057789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 09:56:47.655356 2057789 kubeadm.go:310] 
	I0804 09:56:47.655400 2057789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 09:56:47.655476 2057789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 09:56:47.655549 2057789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 09:56:47.655559 2057789 kubeadm.go:310] 
	I0804 09:56:47.655628 2057789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 09:56:47.655641 2057789 kubeadm.go:310] 
	I0804 09:56:47.655705 2057789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 09:56:47.655734 2057789 kubeadm.go:310] 
	I0804 09:56:47.655793 2057789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 09:56:47.655914 2057789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 09:56:47.656012 2057789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 09:56:47.656023 2057789 kubeadm.go:310] 
	I0804 09:56:47.656139 2057789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 09:56:47.656243 2057789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 09:56:47.656257 2057789 kubeadm.go:310] 
	I0804 09:56:47.656365 2057789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sj2mqy.qqs4jzzp3qsyfaid \
	I0804 09:56:47.656508 2057789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:70f03f17b288c8210e40c636040410bb362871829ad87750de18dee85db81feb \
	I0804 09:56:47.656538 2057789 kubeadm.go:310] 	--control-plane 
	I0804 09:56:47.656547 2057789 kubeadm.go:310] 
	I0804 09:56:47.656663 2057789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 09:56:47.656672 2057789 kubeadm.go:310] 
	I0804 09:56:47.656778 2057789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sj2mqy.qqs4jzzp3qsyfaid \
	I0804 09:56:47.656915 2057789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:70f03f17b288c8210e40c636040410bb362871829ad87750de18dee85db81feb 
	I0804 09:56:47.659297 2057789 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:56:47.659489 2057789 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:56:47.659659 2057789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:56:47.659698 2057789 cni.go:84] Creating CNI manager for ""
	I0804 09:56:47.659720 2057789 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:56:47.661612 2057789 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 09:56:47.662634 2057789 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 09:56:47.671461 2057789 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 09:56:47.688050 2057789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 09:56:47.688107 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:47.688160 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-561540 minikube.k8s.io/updated_at=2025_08_04T09_56_47_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=1ecf6d13fa33984297e9c71c3dd9573295ebf42b minikube.k8s.io/name=auto-561540 minikube.k8s.io/primary=true
	I0804 09:56:47.767520 2057789 ops.go:34] apiserver oom_adj: -16
	I0804 09:56:47.767526 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:48.268138 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:48.768476 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:49.267752 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:49.768399 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:50.268117 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:50.768347 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:51.268166 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:51.768498 2057789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 09:56:51.835765 2057789 kubeadm.go:1105] duration metric: took 4.147713738s to wait for elevateKubeSystemPrivileges
	I0804 09:56:51.835797 2057789 kubeadm.go:394] duration metric: took 14.970532462s to StartCluster
	I0804 09:56:51.835817 2057789 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:51.835893 2057789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:56:51.836826 2057789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:56:51.837069 2057789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 09:56:51.837081 2057789 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 09:56:51.837157 2057789 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 09:56:51.837316 2057789 addons.go:69] Setting storage-provisioner=true in profile "auto-561540"
	I0804 09:56:51.837325 2057789 config.go:182] Loaded profile config "auto-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:56:51.837337 2057789 addons.go:238] Setting addon storage-provisioner=true in "auto-561540"
	I0804 09:56:51.837340 2057789 addons.go:69] Setting default-storageclass=true in profile "auto-561540"
	I0804 09:56:51.837367 2057789 host.go:66] Checking if "auto-561540" exists ...
	I0804 09:56:51.837375 2057789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-561540"
	I0804 09:56:51.837779 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Status}}
	I0804 09:56:51.837958 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Status}}
	I0804 09:56:51.838770 2057789 out.go:177] * Verifying Kubernetes components...
	I0804 09:56:51.839884 2057789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:56:51.858983 2057789 addons.go:238] Setting addon default-storageclass=true in "auto-561540"
	I0804 09:56:51.859021 2057789 host.go:66] Checking if "auto-561540" exists ...
	I0804 09:56:51.859329 2057789 cli_runner.go:164] Run: docker container inspect auto-561540 --format={{.State.Status}}
	I0804 09:56:51.862451 2057789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 09:56:51.863320 2057789 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 09:56:51.863337 2057789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 09:56:51.863383 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:51.882817 2057789 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 09:56:51.882844 2057789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 09:56:51.882901 2057789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-561540
	I0804 09:56:51.889376 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:51.903032 2057789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/auto-561540/id_rsa Username:docker}
	I0804 09:56:51.970631 2057789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 09:56:51.999933 2057789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:56:52.085790 2057789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 09:56:52.086441 2057789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 09:56:52.471843 2057789 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
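	(The sed pipeline at 09:56:51.970631 splices a hosts stanza, plus a log directive, into the CoreDNS Corefile so in-cluster workloads can resolve host.minikube.internal. Reconstructed from the sed expressions themselves, the injected fragment is:

	    hosts {
	       192.168.103.1 host.minikube.internal
	       fallthrough
	    }
	)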
	I0804 09:56:52.472955 2057789 node_ready.go:35] waiting up to 15m0s for node "auto-561540" to be "Ready" ...
	I0804 09:56:52.482241 2057789 node_ready.go:49] node "auto-561540" is "Ready"
	I0804 09:56:52.482269 2057789 node_ready.go:38] duration metric: took 9.286175ms for node "auto-561540" to be "Ready" ...
	I0804 09:56:52.482290 2057789 api_server.go:52] waiting for apiserver process to appear ...
	I0804 09:56:52.482343 2057789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:56:52.778963 2057789 api_server.go:72] duration metric: took 941.845385ms to wait for apiserver process to appear ...
	I0804 09:56:52.778993 2057789 api_server.go:88] waiting for apiserver healthz status ...
	I0804 09:56:52.779015 2057789 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0804 09:56:52.785746 2057789 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0804 09:56:52.786581 2057789 api_server.go:141] control plane version: v1.33.3
	I0804 09:56:52.786605 2057789 api_server.go:131] duration metric: took 7.605896ms to wait for apiserver health ...
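	(The health gate above is a plain HTTPS GET that must return 200 with body "ok". Reproducing it by hand — a sketch; the CA path is inferred from the certificateDir used elsewhere in this log, or pass -k to skip verification:

	    curl -fs --cacert /var/lib/minikube/certs/ca.crt https://192.168.103.2:8443/healthz
	    # prints: ok
	)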
	I0804 09:56:52.786613 2057789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 09:56:52.793027 2057789 system_pods.go:59] 8 kube-system pods found
	I0804 09:56:52.793063 2057789 system_pods.go:61] "coredns-674b8bbfcf-7mf9b" [30a7ac58-4b18-4df7-a6d4-977399a62ac2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:52.793074 2057789 system_pods.go:61] "coredns-674b8bbfcf-p4qcs" [589eb69f-00b0-4096-8aa8-46b37da10b60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:52.793104 2057789 system_pods.go:61] "etcd-auto-561540" [27461764-599f-4539-946d-43107ca6bc4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 09:56:52.793117 2057789 system_pods.go:61] "kube-apiserver-auto-561540" [933d4344-eded-46f5-ae22-7517a8d9e728] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 09:56:52.793131 2057789 system_pods.go:61] "kube-controller-manager-auto-561540" [cb64d9e0-9f20-4252-bc3d-9a9d621476d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 09:56:52.793142 2057789 system_pods.go:61] "kube-proxy-k5826" [8c8dd6c6-7ded-43bf-a9fb-2d32c65c936a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0804 09:56:52.793153 2057789 system_pods.go:61] "kube-scheduler-auto-561540" [98e8ea2e-80b5-42e1-b011-32ef7fb08a48] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 09:56:52.793165 2057789 system_pods.go:61] "storage-provisioner" [2920c3fb-5edf-434b-95c9-3cb930257272] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 09:56:52.793173 2057789 system_pods.go:74] duration metric: took 6.553917ms to wait for pod list to return data ...
	I0804 09:56:52.793184 2057789 default_sa.go:34] waiting for default service account to be created ...
	I0804 09:56:52.793748 2057789 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 09:56:52.794736 2057789 addons.go:514] duration metric: took 957.575463ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 09:56:52.795161 2057789 default_sa.go:45] found service account: "default"
	I0804 09:56:52.795181 2057789 default_sa.go:55] duration metric: took 1.990343ms for default service account to be created ...
	I0804 09:56:52.795191 2057789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 09:56:52.797369 2057789 system_pods.go:86] 8 kube-system pods found
	I0804 09:56:52.797401 2057789 system_pods.go:89] "coredns-674b8bbfcf-7mf9b" [30a7ac58-4b18-4df7-a6d4-977399a62ac2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:52.797410 2057789 system_pods.go:89] "coredns-674b8bbfcf-p4qcs" [589eb69f-00b0-4096-8aa8-46b37da10b60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:52.797419 2057789 system_pods.go:89] "etcd-auto-561540" [27461764-599f-4539-946d-43107ca6bc4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 09:56:52.797430 2057789 system_pods.go:89] "kube-apiserver-auto-561540" [933d4344-eded-46f5-ae22-7517a8d9e728] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 09:56:52.797443 2057789 system_pods.go:89] "kube-controller-manager-auto-561540" [cb64d9e0-9f20-4252-bc3d-9a9d621476d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 09:56:52.797453 2057789 system_pods.go:89] "kube-proxy-k5826" [8c8dd6c6-7ded-43bf-a9fb-2d32c65c936a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0804 09:56:52.797463 2057789 system_pods.go:89] "kube-scheduler-auto-561540" [98e8ea2e-80b5-42e1-b011-32ef7fb08a48] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 09:56:52.797473 2057789 system_pods.go:89] "storage-provisioner" [2920c3fb-5edf-434b-95c9-3cb930257272] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 09:56:52.797501 2057789 retry.go:31] will retry after 227.0878ms: missing components: kube-dns, kube-proxy
	I0804 09:56:52.975896 2057789 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-561540" context rescaled to 1 replicas
	I0804 09:56:53.028802 2057789 system_pods.go:86] 8 kube-system pods found
	I0804 09:56:53.028845 2057789 system_pods.go:89] "coredns-674b8bbfcf-7mf9b" [30a7ac58-4b18-4df7-a6d4-977399a62ac2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:53.028856 2057789 system_pods.go:89] "coredns-674b8bbfcf-p4qcs" [589eb69f-00b0-4096-8aa8-46b37da10b60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:53.028866 2057789 system_pods.go:89] "etcd-auto-561540" [27461764-599f-4539-946d-43107ca6bc4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 09:56:53.028875 2057789 system_pods.go:89] "kube-apiserver-auto-561540" [933d4344-eded-46f5-ae22-7517a8d9e728] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 09:56:53.028885 2057789 system_pods.go:89] "kube-controller-manager-auto-561540" [cb64d9e0-9f20-4252-bc3d-9a9d621476d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 09:56:53.028898 2057789 system_pods.go:89] "kube-proxy-k5826" [8c8dd6c6-7ded-43bf-a9fb-2d32c65c936a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0804 09:56:53.028905 2057789 system_pods.go:89] "kube-scheduler-auto-561540" [98e8ea2e-80b5-42e1-b011-32ef7fb08a48] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 09:56:53.028917 2057789 system_pods.go:89] "storage-provisioner" [2920c3fb-5edf-434b-95c9-3cb930257272] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 09:56:53.028938 2057789 retry.go:31] will retry after 352.483828ms: missing components: kube-dns, kube-proxy
	I0804 09:56:53.386647 2057789 system_pods.go:86] 8 kube-system pods found
	I0804 09:56:53.386687 2057789 system_pods.go:89] "coredns-674b8bbfcf-7mf9b" [30a7ac58-4b18-4df7-a6d4-977399a62ac2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:53.386698 2057789 system_pods.go:89] "coredns-674b8bbfcf-p4qcs" [589eb69f-00b0-4096-8aa8-46b37da10b60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 09:56:53.386706 2057789 system_pods.go:89] "etcd-auto-561540" [27461764-599f-4539-946d-43107ca6bc4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 09:56:53.386716 2057789 system_pods.go:89] "kube-apiserver-auto-561540" [933d4344-eded-46f5-ae22-7517a8d9e728] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 09:56:53.386725 2057789 system_pods.go:89] "kube-controller-manager-auto-561540" [cb64d9e0-9f20-4252-bc3d-9a9d621476d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 09:56:53.386732 2057789 system_pods.go:89] "kube-proxy-k5826" [8c8dd6c6-7ded-43bf-a9fb-2d32c65c936a] Running
	I0804 09:56:53.386742 2057789 system_pods.go:89] "kube-scheduler-auto-561540" [98e8ea2e-80b5-42e1-b011-32ef7fb08a48] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 09:56:53.386757 2057789 system_pods.go:89] "storage-provisioner" [2920c3fb-5edf-434b-95c9-3cb930257272] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 09:56:53.386769 2057789 system_pods.go:126] duration metric: took 591.56957ms to wait for k8s-apps to be running ...
	I0804 09:56:53.386783 2057789 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 09:56:53.386834 2057789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:56:53.400291 2057789 system_svc.go:56] duration metric: took 13.499398ms WaitForService to wait for kubelet
	I0804 09:56:53.400319 2057789 kubeadm.go:578] duration metric: took 1.5632098s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 09:56:53.400339 2057789 node_conditions.go:102] verifying NodePressure condition ...
	I0804 09:56:53.403463 2057789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0804 09:56:53.403536 2057789 node_conditions.go:123] node cpu capacity is 8
	I0804 09:56:53.403567 2057789 node_conditions.go:105] duration metric: took 3.22138ms to run NodePressure ...
	I0804 09:56:53.403583 2057789 start.go:241] waiting for startup goroutines ...
	I0804 09:56:53.403602 2057789 start.go:246] waiting for cluster config update ...
	I0804 09:56:53.403623 2057789 start.go:255] writing updated cluster config ...
	I0804 09:56:53.403874 2057789 ssh_runner.go:195] Run: rm -f paused
	I0804 09:56:53.407877 2057789 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0804 09:56:53.411769 2057789 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-7mf9b" in "kube-system" namespace to be "Ready" or be gone ...
	W0804 09:56:55.416675 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:56:57.416732 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:56:59.417043 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:01.417358 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:03.917059 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:06.416227 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:08.416707 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:10.417351 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:12.917000 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:14.917503 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:17.416738 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:19.417041 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:21.916335 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:23.916610 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:25.916938 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	W0804 09:57:28.417654 2057789 pod_ready.go:104] pod "coredns-674b8bbfcf-7mf9b" is not "Ready", error: <nil>
	I0804 09:57:30.916473 2057789 pod_ready.go:94] pod "coredns-674b8bbfcf-7mf9b" is "Ready"
	I0804 09:57:30.916499 2057789 pod_ready.go:86] duration metric: took 37.504706091s for pod "coredns-674b8bbfcf-7mf9b" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.916511 2057789 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-p4qcs" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.918160 2057789 pod_ready.go:99] pod "coredns-674b8bbfcf-p4qcs" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-p4qcs" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-p4qcs" not found
	I0804 09:57:30.918181 2057789 pod_ready.go:86] duration metric: took 1.664366ms for pod "coredns-674b8bbfcf-p4qcs" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.920269 2057789 pod_ready.go:83] waiting for pod "etcd-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.923317 2057789 pod_ready.go:94] pod "etcd-auto-561540" is "Ready"
	I0804 09:57:30.923336 2057789 pod_ready.go:86] duration metric: took 3.046678ms for pod "etcd-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.925040 2057789 pod_ready.go:83] waiting for pod "kube-apiserver-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.928148 2057789 pod_ready.go:94] pod "kube-apiserver-auto-561540" is "Ready"
	I0804 09:57:30.928169 2057789 pod_ready.go:86] duration metric: took 3.110139ms for pod "kube-apiserver-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:30.929712 2057789 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:31.315148 2057789 pod_ready.go:94] pod "kube-controller-manager-auto-561540" is "Ready"
	I0804 09:57:31.315175 2057789 pod_ready.go:86] duration metric: took 385.444474ms for pod "kube-controller-manager-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:31.514934 2057789 pod_ready.go:83] waiting for pod "kube-proxy-k5826" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:31.915392 2057789 pod_ready.go:94] pod "kube-proxy-k5826" is "Ready"
	I0804 09:57:31.915420 2057789 pod_ready.go:86] duration metric: took 400.458987ms for pod "kube-proxy-k5826" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:32.115798 2057789 pod_ready.go:83] waiting for pod "kube-scheduler-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:32.515138 2057789 pod_ready.go:94] pod "kube-scheduler-auto-561540" is "Ready"
	I0804 09:57:32.515165 2057789 pod_ready.go:86] duration metric: took 399.336384ms for pod "kube-scheduler-auto-561540" in "kube-system" namespace to be "Ready" or be gone ...
	I0804 09:57:32.515176 2057789 pod_ready.go:40] duration metric: took 39.107269022s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
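	(The ~39s of pod_ready polling above is minikube's extra readiness gate over the labelled kube-system pods. The same check can be expressed with kubectl directly — a sketch showing one of the six selectors minikube iterates over:

	    kubectl --context auto-561540 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	)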
	I0804 09:57:32.557502 2057789 start.go:617] kubectl: 1.33.2, cluster: 1.33.3 (minor skew: 0)
	I0804 09:57:32.559043 2057789 out.go:177] * Done! kubectl is now configured to use "auto-561540" cluster and "default" namespace by default
	I0804 09:57:50.258904 1914687 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000987094s
	I0804 09:57:50.258951 1914687 kubeadm.go:310] 
	I0804 09:57:50.259098 1914687 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:57:50.259231 1914687 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:57:50.259350 1914687 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:57:50.259468 1914687 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:57:50.259566 1914687 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:57:50.259701 1914687 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:57:50.259720 1914687 kubeadm.go:310] 
	I0804 09:57:50.262393 1914687 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:57:50.262641 1914687 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:57:50.262798 1914687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:57:50.263102 1914687 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I0804 09:57:50.263213 1914687 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 09:57:50.263311 1914687 kubeadm.go:394] duration metric: took 12m18.61154147s to StartCluster
	I0804 09:57:50.263367 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 09:57:50.263425 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 09:57:50.302849 1914687 cri.go:89] found id: "df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1"
	I0804 09:57:50.302878 1914687 cri.go:89] found id: ""
	I0804 09:57:50.302888 1914687 logs.go:282] 1 containers: [df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1]
	I0804 09:57:50.302945 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.307064 1914687 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 09:57:50.307136 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 09:57:50.340445 1914687 cri.go:89] found id: "db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6"
	I0804 09:57:50.340467 1914687 cri.go:89] found id: ""
	I0804 09:57:50.340475 1914687 logs.go:282] 1 containers: [db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6]
	I0804 09:57:50.340515 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.343804 1914687 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 09:57:50.343855 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 09:57:50.377703 1914687 cri.go:89] found id: ""
	I0804 09:57:50.377732 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.377743 1914687 logs.go:284] No container was found matching "coredns"
	I0804 09:57:50.377752 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 09:57:50.377813 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 09:57:50.413120 1914687 cri.go:89] found id: "85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863"
	I0804 09:57:50.413146 1914687 cri.go:89] found id: ""
	I0804 09:57:50.413155 1914687 logs.go:282] 1 containers: [85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863]
	I0804 09:57:50.413208 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.416921 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 09:57:50.416981 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 09:57:50.457153 1914687 cri.go:89] found id: ""
	I0804 09:57:50.457177 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.457185 1914687 logs.go:284] No container was found matching "kube-proxy"
	I0804 09:57:50.457190 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 09:57:50.457273 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 09:57:50.497723 1914687 cri.go:89] found id: "1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae"
	I0804 09:57:50.497747 1914687 cri.go:89] found id: ""
	I0804 09:57:50.497758 1914687 logs.go:282] 1 containers: [1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae]
	I0804 09:57:50.497802 1914687 ssh_runner.go:195] Run: which crictl
	I0804 09:57:50.501780 1914687 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 09:57:50.501850 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 09:57:50.539775 1914687 cri.go:89] found id: ""
	I0804 09:57:50.539798 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.539806 1914687 logs.go:284] No container was found matching "kindnet"
	I0804 09:57:50.539811 1914687 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 09:57:50.539851 1914687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 09:57:50.575765 1914687 cri.go:89] found id: ""
	I0804 09:57:50.575792 1914687 logs.go:282] 0 containers: []
	W0804 09:57:50.575802 1914687 logs.go:284] No container was found matching "storage-provisioner"
	I0804 09:57:50.575824 1914687 logs.go:123] Gathering logs for describe nodes ...
	I0804 09:57:50.575838 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 09:57:50.631767 1914687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 09:57:50.631802 1914687 logs.go:123] Gathering logs for kube-apiserver [df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1] ...
	I0804 09:57:50.631816 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1"
	I0804 09:57:50.673833 1914687 logs.go:123] Gathering logs for etcd [db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6] ...
	I0804 09:57:50.673862 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6"
	I0804 09:57:50.713861 1914687 logs.go:123] Gathering logs for kube-scheduler [85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863] ...
	I0804 09:57:50.713888 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85f53e1b115a8cbdcabdd536f25da2b7dd2c2ad63fcf8505995c27e1e7690863"
	I0804 09:57:50.782670 1914687 logs.go:123] Gathering logs for kube-controller-manager [1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae] ...
	I0804 09:57:50.782708 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae"
	I0804 09:57:50.821748 1914687 logs.go:123] Gathering logs for kubelet ...
	I0804 09:57:50.821774 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 09:57:50.911276 1914687 logs.go:123] Gathering logs for dmesg ...
	I0804 09:57:50.911313 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 09:57:50.938627 1914687 logs.go:123] Gathering logs for Docker ...
	I0804 09:57:50.938659 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 09:57:50.973015 1914687 logs.go:123] Gathering logs for container status ...
	I0804 09:57:50.973046 1914687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 09:57:51.013375 1914687 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.745199ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.504522425s
	[control-plane-check] kube-scheduler is healthy after 32.866964303s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000987094s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W0804 09:57:51.013440 1914687 out.go:270] * 
	W0804 09:57:51.013521 1914687 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0804 09:57:51.013543 1914687 out.go:270] * 
	W0804 09:57:51.015357 1914687 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 09:57:51.019682 1914687 out.go:201] 
	W0804 09:57:51.020752 1914687 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0804 09:57:51.020788 1914687 out.go:270] * 
	I0804 09:57:51.023892 1914687 out.go:201] 
	
	
	==> Docker <==
	Aug 04 09:53:44 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:44.945964123Z" level=info msg="ignoring event" container=24b13f7fdeac35628ba399e6aca72932f4c4889561598c3f62975d55eef17c64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:53:46 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:46.727522757Z" level=info msg="ignoring event" container=480189befa30a8558ae438e52cc0431a56dd9152126555061fb3bd7b583a25cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:53:46 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:46.800042539Z" level=info msg="ignoring event" container=4c2439438f9d82bb74632fc724d1d707187fcdca9d25241c1519586b38ab9af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:53:46 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:46.880457603Z" level=info msg="ignoring event" container=a4b375782f93bf49bd65c1775afce5869b9547164dc0e4f21540a0a9e4a9deee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:53:46 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:46.965914784Z" level=info msg="ignoring event" container=c81385cf57f74824929dc059ed1b14c42d322e9c067d4879ccfd95c2c5367343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:53:50 kubernetes-upgrade-402519 cri-dockerd[1490]: time="2025-08-04T09:53:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/07fe3e32906993653e3e69dbd1da2cbafc9f5bd899ea15187e8f32948286e94e/resolv.conf as [nameserver 192.168.85.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:53:50 kubernetes-upgrade-402519 cri-dockerd[1490]: time="2025-08-04T09:53:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ff587edc669adcce987cd86b4f0478973e4831d7b9ff4b417519c6c341e04fd1/resolv.conf as [nameserver 192.168.85.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:53:50 kubernetes-upgrade-402519 cri-dockerd[1490]: time="2025-08-04T09:53:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45feb5717d10668f0440aba7aad307676eca4ef1f2efda8cd7e2a5773a2e5b97/resolv.conf as [nameserver 192.168.85.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:53:50 kubernetes-upgrade-402519 cri-dockerd[1490]: time="2025-08-04T09:53:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb834cd212ec916fd9f1383ce36c0751cffd85e02dddffbea35470c749ac462b/resolv.conf as [nameserver 192.168.85.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 09:53:51 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:51.075317151Z" level=info msg="ignoring event" container=882c15e062ca3f76220ba7923c807297bd210ff55a006861546bd7f71f24a28a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:53:51 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:53:51.577293191Z" level=info msg="ignoring event" container=efe0e6ff2ca88ae970ccd3914747deee4038eb6122368a6356899470170f3177 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:54:01 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:54:01.653880710Z" level=info msg="ignoring event" container=0fbe7ef172aadb25a609e280e5ca128b0926ac5bb3be0bd302182f155ea6224e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:54:12 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:54:12.122934206Z" level=info msg="ignoring event" container=4dc1d02658b1859bd71bf378b8a06853503b453a62863fd1192e530f734b6081 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:54:23 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:54:23.228830652Z" level=info msg="ignoring event" container=18e5db7ec8e19bbda67435f58c9f9cb0b83cf4c28a0cdd57e8d792b0d4030301 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:54:33 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:54:33.139872600Z" level=info msg="ignoring event" container=8169ba4a14b61bf18d7db53a73d00e353ef0413c560f4571d4c9e63244c23654 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:54:33 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:54:33.200653203Z" level=info msg="ignoring event" container=c1bb170cfface3086f3b4eee42dd991000543ee7d26d1d379f252e0458b1ab1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:54:46 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:54:46.490939494Z" level=info msg="ignoring event" container=b14e127fc0a204ae723881ec8aa1b4370f829bb7262c8ad54048084835449356 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:55:13 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:55:13.223540463Z" level=info msg="ignoring event" container=ad20813b696abfaa915e8f5e4b71e50043e76841a81d4479b8df1e6d294837d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:55:18 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:55:18.033389865Z" level=info msg="ignoring event" container=b46a46d926703d835efbab85f5adce12dda071227b53e3ddf6b42ad927c1c3ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:55:19 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:55:19.056283443Z" level=info msg="ignoring event" container=f93c5500e09e01b7573982f5f22df35b0e0ea530c30ab85daff135b307bc83f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:56:02 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:56:02.749167699Z" level=info msg="ignoring event" container=b685b3213e870290db3864390cc3d35ca8f48efbf8776cb47765f56506a9be0d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:56:08 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:56:08.236458920Z" level=info msg="ignoring event" container=5067fd5ec193346c8c6497e22248dc76e7be4cfa546a6c14c5965de373c9c6c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:56:39 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:56:39.226583758Z" level=info msg="ignoring event" container=db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:57:08 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:57:08.689213244Z" level=info msg="ignoring event" container=df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 09:57:19 kubernetes-upgrade-402519 dockerd[1660]: time="2025-08-04T09:57:19.708188415Z" level=info msg="ignoring event" container=1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1987f93e651ec       9ad783615e1bc       53 seconds ago       Exited              kube-controller-manager   4                   eb834cd212ec9       kube-controller-manager-kubernetes-upgrade-402519
	df06cba5b9c28       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            4                   45feb5717d106       kube-apiserver-kubernetes-upgrade-402519
	db5c13daaf7ce       1e30c0b1e9b99       About a minute ago   Exited              etcd                      5                   ff587edc669ad       etcd-kubernetes-upgrade-402519
	85f53e1b115a8       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            0                   07fe3e3290699       kube-scheduler-kubernetes-upgrade-402519
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 54 80 36 bf b7 08 06
	[Aug 4 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 56 a6 47 7b 2e 08 06
	[Aug 4 09:52] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev bridge
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 1f 50 14 96 08 06
	[  +0.901518] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7a a7 36 2f 01 7b 08 06
	[ +10.993203] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 5e 05 20 e9 a3 08 06
	[Aug 4 09:53] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 3d 5b 9b 9d da 08 06
	[Aug 4 09:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df 7a 88 cc d2 08 06
	[  +0.013413] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2a 07 89 39 b9 1a 08 06
	[Aug 4 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 15 d0 1d eb 4c 08 06
	[  +7.731067] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e 6d f0 7d b8 04 08 06
	[Aug 4 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 7e 87 da f1 3d 08 06
	[Aug 4 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 59 6e 18 4f 07 08 06
	[  +0.000520] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 72 7e 87 da f1 3d 08 06
	
	
	==> etcd [db5c13daaf7c] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
	
	
	
	==> kernel <==
	 09:57:52 up 1 day, 18:39,  0 users,  load average: 0.92, 1.73, 1.81
	Linux kubernetes-upgrade-402519 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [df06cba5b9c2] <==
	W0804 09:56:48.656676       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 09:56:48.657047       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 09:56:48.658729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 09:56:48.665088       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 09:56:48.670438       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 09:56:48.670454       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 09:56:48.670672       1 instance.go:232] Using reconciler: lease
	W0804 09:56:48.671522       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:48.671541       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:49.657220       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:49.657438       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:49.672306       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:51.202901       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:51.218231       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:51.573223       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:53.920655       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:54.016352       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:54.149310       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:57.446833       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:58.516376       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:56:58.774692       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:57:04.026838       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:57:06.118300       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 09:57:06.221916       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 09:57:08.672581       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
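
Everything in this block is downstream of the etcd crash loop: the apiserver cannot reach 127.0.0.1:2379, times out building its lease storage, and dies with the fatal at 09:57:08. A self-contained probe of the same port, offered as a diagnostic sketch rather than anything the test suite itself runs:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Probe the etcd client port that the apiserver keeps failing to reach.
	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			fmt.Println("etcd unreachable:", err) // expect "connection refused" while etcd crash-loops
			return
		}
		conn.Close()
		fmt.Println("etcd is accepting TCP connections")
	}
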
	
	
	==> kube-controller-manager [1987f93e651e] <==
	I0804 09:56:59.963456       1 serving.go:386] Generated self-signed cert in-memory
	I0804 09:57:00.305580       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 09:57:00.305647       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 09:57:00.308481       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 09:57:00.308487       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 09:57:00.309000       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 09:57:00.309030       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 09:57:19.678486       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.85.2:8443/healthz\": dial tcp 192.168.85.2:8443: connect: connection refused"
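
The controller-manager sits one step further downstream: it polls the apiserver's /healthz (the exact URL in the error above) and gives up when the dial keeps being refused. A sketch of a single health-check iteration against that endpoint; the insecure TLS setting is an illustration-only shortcut, a real client would trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// One iteration of a /healthz poll against the endpoint from the log.
	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}
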
	
	
	==> kube-scheduler [85f53e1b115a] <==
	E0804 09:56:35.892899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:56:37.173134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:56:37.194948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.85.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:56:38.014165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.85.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:56:39.069307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.85.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:56:43.035566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:57:02.150140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.85.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:57:02.592884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.85.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:57:02.829014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 09:57:03.212575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 09:57:04.601976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.85.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 09:57:09.677896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.85.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.85.2:36266->192.168.85.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 09:57:09.677899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.85.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.85.2:36282->192.168.85.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 09:57:09.677896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.85.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.85.2:47620->192.168.85.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 09:57:09.678188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.85.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.85.2:36278->192.168.85.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 09:57:11.754255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.85.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 09:57:16.438339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 09:57:22.670688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 09:57:23.764625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 09:57:32.765616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.85.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 09:57:33.668926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.85.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 09:57:36.735195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 09:57:37.950439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.85.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 09:57:40.826195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.85.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 09:57:45.023295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	
	
	==> kubelet <==
	Aug 04 09:57:37 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:37.101677   13585 scope.go:117] "RemoveContainer" containerID="db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6"
	Aug 04 09:57:37 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:37.101850   13585 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-kubernetes-upgrade-402519_kube-system(a7ad24b0d4d807bb27862d8da8fb431d)\"" pod="kube-system/etcd-kubernetes-upgrade-402519" podUID="a7ad24b0d4d807bb27862d8da8fb431d"
	Aug 04 09:57:37 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:37.229393   13585 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-402519"
	Aug 04 09:57:37 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:37.229746   13585 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="kubernetes-upgrade-402519"
	Aug 04 09:57:38 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:38.215624   13585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-402519?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Aug 04 09:57:39 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:39.101899   13585 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-402519\" not found" node="kubernetes-upgrade-402519"
	Aug 04 09:57:39 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:39.101993   13585 scope.go:117] "RemoveContainer" containerID="df06cba5b9c28ae26a422996b6810d9bf6e1ec9d76bb921f463ec39a4953c8d1"
	Aug 04 09:57:39 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:39.102138   13585 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-402519_kube-system(4fde1e2d3ab3c3d971891cfb4e370913)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-402519" podUID="4fde1e2d3ab3c3d971891cfb4e370913"
	Aug 04 09:57:40 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:40.137975   13585 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-402519\" not found"
	Aug 04 09:57:44 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:44.231541   13585 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-402519"
	Aug 04 09:57:44 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:44.231906   13585 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="kubernetes-upgrade-402519"
	Aug 04 09:57:45 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:45.216817   13585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-402519?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Aug 04 09:57:46 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:46.910480   13585 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-402519.185887ac8f8a060c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-402519,UID:kubernetes-upgrade-402519,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-402519,},FirstTimestamp:2025-08-04 09:53:50.073120268 +0000 UTC m=+0.318790339,LastTimestamp:2025-08-04 09:53:50.073120268 +0000 UTC m=+0.318790339,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-402519,}"
	Aug 04 09:57:49 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:49.102466   13585 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-402519\" not found" node="kubernetes-upgrade-402519"
	Aug 04 09:57:49 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:49.102568   13585 scope.go:117] "RemoveContainer" containerID="1987f93e651ec629976fe3a2f8d2144200451700819d2a280453744e9c9755ae"
	Aug 04 09:57:49 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:49.102570   13585 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-402519\" not found" node="kubernetes-upgrade-402519"
	Aug 04 09:57:49 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:49.102728   13585 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-402519_kube-system(6a44e3762dd52e6583d9e4c7353aff2c)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-402519" podUID="6a44e3762dd52e6583d9e4c7353aff2c"
	Aug 04 09:57:50 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:50.101168   13585 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-402519\" not found" node="kubernetes-upgrade-402519"
	Aug 04 09:57:50 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:50.101273   13585 scope.go:117] "RemoveContainer" containerID="db5c13daaf7ce7f0d0d0e95907cdfe123837200d44e9c99eadf311ce4e98e7e6"
	Aug 04 09:57:50 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:50.101420   13585 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-kubernetes-upgrade-402519_kube-system(a7ad24b0d4d807bb27862d8da8fb431d)\"" pod="kube-system/etcd-kubernetes-upgrade-402519" podUID="a7ad24b0d4d807bb27862d8da8fb431d"
	Aug 04 09:57:50 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:50.138688   13585 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-402519\" not found"
	Aug 04 09:57:50 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:50.344755   13585 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.85.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 09:57:51 kubernetes-upgrade-402519 kubelet[13585]: I0804 09:57:51.233977   13585 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-402519"
	Aug 04 09:57:51 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:51.234388   13585 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="kubernetes-upgrade-402519"
	Aug 04 09:57:52 kubernetes-upgrade-402519 kubelet[13585]: E0804 09:57:52.217511   13585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-402519?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-402519 -n kubernetes-upgrade-402519
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-402519 -n kubernetes-upgrade-402519: exit status 2 (288.645531ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-402519" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-402519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-402519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-402519: (1.956801745s)
--- FAIL: TestKubernetesUpgrade (805.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (523.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 80 (8m42.723069587s)

                                                
                                                
-- stdout --
	* [no-preload-499486] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-499486" primary control-plane node in "no-preload-499486" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 09:53:14.814634 2029364 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:53:14.815543 2029364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:53:14.815554 2029364 out.go:358] Setting ErrFile to fd 2...
	I0804 09:53:14.815558 2029364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:53:14.815768 2029364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:53:14.816325 2029364 out.go:352] Setting JSON to false
	I0804 09:53:14.817698 2029364 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153284,"bootTime":1754147911,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:53:14.817796 2029364 start.go:140] virtualization: kvm guest
	I0804 09:53:14.819019 2029364 out.go:177] * [no-preload-499486] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:53:14.820242 2029364 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:53:14.820296 2029364 notify.go:220] Checking for updates...
	I0804 09:53:14.822054 2029364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:53:14.823097 2029364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:53:14.824049 2029364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:53:14.824934 2029364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:53:14.825909 2029364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:53:14.827248 2029364 config.go:182] Loaded profile config "cert-expiration-948981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:53:14.827380 2029364 config.go:182] Loaded profile config "kubernetes-upgrade-402519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:53:14.827508 2029364 config.go:182] Loaded profile config "old-k8s-version-304259": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0804 09:53:14.827631 2029364 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:53:14.850451 2029364 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:53:14.850579 2029364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:53:14.904104 2029364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-08-04 09:53:14.893794773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:53:14.904214 2029364 docker.go:318] overlay module found
	I0804 09:53:14.905739 2029364 out.go:177] * Using the docker driver based on user configuration
	I0804 09:53:14.906811 2029364 start.go:304] selected driver: docker
	I0804 09:53:14.906832 2029364 start.go:918] validating driver "docker" against <nil>
	I0804 09:53:14.906847 2029364 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:53:14.907735 2029364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:53:14.959961 2029364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-08-04 09:53:14.94982503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:53:14.960116 2029364 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0804 09:53:14.960345 2029364 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 09:53:14.961851 2029364 out.go:177] * Using Docker driver with root privileges
	I0804 09:53:14.962877 2029364 cni.go:84] Creating CNI manager for ""
	I0804 09:53:14.962940 2029364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:53:14.962950 2029364 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 09:53:14.963026 2029364 start.go:348] cluster config:
	{Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:53:14.964090 2029364 out.go:177] * Starting "no-preload-499486" primary control-plane node in "no-preload-499486" cluster
	I0804 09:53:14.965037 2029364 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:53:14.966042 2029364 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:53:14.966977 2029364 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:53:14.967003 2029364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:53:14.967099 2029364 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/config.json ...
	I0804 09:53:14.967138 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/config.json: {Name:mk0474902454b3818eb699d022928ae987abbe6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:14.967266 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
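
The binary.go line shows minikube skipping its local cache and streaming kubeadm straight from dl.k8s.io, validated against the published .sha256 file. A rough equivalent of that verification, using the URLs from the log; buffering the whole binary in memory is a simplification for the sketch:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	// Download kubeadm and compare its SHA-256 against the published digest.
	func main() {
		const base = "https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // digest file may carry a trailing filename
		if hex.EncodeToString(got[:]) == want {
			fmt.Println("checksum OK")
		} else {
			fmt.Println("checksum mismatch: got", hex.EncodeToString(got[:]), "want", want)
		}
	}
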
	I0804 09:53:14.987960 2029364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:53:14.987990 2029364 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:53:14.988006 2029364 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:53:14.988041 2029364 start.go:360] acquireMachinesLock for no-preload-499486: {Name:mk37c51365b17ced600d568c1425a7f58dbdcfcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:14.988153 2029364 start.go:364] duration metric: took 94.622µs to acquireMachinesLock for "no-preload-499486"
	I0804 09:53:14.988178 2029364 start.go:93] Provisioning new machine with config: &{Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 09:53:14.988259 2029364 start.go:125] createHost starting for "" (driver="docker")
	I0804 09:53:14.990106 2029364 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0804 09:53:14.990321 2029364 start.go:159] libmachine.API.Create for "no-preload-499486" (driver="docker")
	I0804 09:53:14.990353 2029364 client.go:168] LocalClient.Create starting
	I0804 09:53:14.990416 2029364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem
	I0804 09:53:14.990452 2029364 main.go:141] libmachine: Decoding PEM data...
	I0804 09:53:14.990466 2029364 main.go:141] libmachine: Parsing certificate...
	I0804 09:53:14.990533 2029364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem
	I0804 09:53:14.990564 2029364 main.go:141] libmachine: Decoding PEM data...
	I0804 09:53:14.990575 2029364 main.go:141] libmachine: Parsing certificate...
	I0804 09:53:14.990857 2029364 cli_runner.go:164] Run: docker network inspect no-preload-499486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0804 09:53:15.011043 2029364 cli_runner.go:211] docker network inspect no-preload-499486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0804 09:53:15.011205 2029364 network_create.go:284] running [docker network inspect no-preload-499486] to gather additional debugging logs...
	I0804 09:53:15.011246 2029364 cli_runner.go:164] Run: docker network inspect no-preload-499486
	W0804 09:53:15.030722 2029364 cli_runner.go:211] docker network inspect no-preload-499486 returned with exit code 1
	I0804 09:53:15.030755 2029364 network_create.go:287] error running [docker network inspect no-preload-499486]: docker network inspect no-preload-499486: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-499486 not found
	I0804 09:53:15.030779 2029364 network_create.go:289] output of [docker network inspect no-preload-499486]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-499486 not found
	
	** /stderr **
	I0804 09:53:15.030890 2029364 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:53:15.048494 2029364 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4122743d943 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:66:3d:c4:8d:93} reservation:<nil>}
	I0804 09:53:15.049456 2029364 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8451716aa30c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:1d:5b:3c:f6:bd} reservation:<nil>}
	I0804 09:53:15.050332 2029364 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9d42b63aa0b7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:9d:f7:36:38:48} reservation:<nil>}
	I0804 09:53:15.051101 2029364 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0989c37a265b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:df:4c:21:8b:f1} reservation:<nil>}
	I0804 09:53:15.052061 2029364 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7a718837c112 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:36:4c:e3:06:6c:d3} reservation:<nil>}
	I0804 09:53:15.053120 2029364 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8b580}
	I0804 09:53:15.053152 2029364 network_create.go:124] attempt to create docker network no-preload-499486 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0804 09:53:15.053206 2029364 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-499486 no-preload-499486
	I0804 09:53:15.106483 2029364 network_create.go:108] docker network no-preload-499486 192.168.94.0/24 created
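
Note: the sequence above is minikube's free-subnet scan. Candidate 192.168.x.0/24 ranges are tried in steps of 9 (49, 58, 67, 76, 85, ...) and the first one not claimed by an existing bridge is handed to `docker network create` with a fixed gateway and MTU. A minimal Go sketch of that scan, not minikube's actual code (the interface probe below stands in for its subnet-reservation logic):

    package main

    import (
    	"fmt"
    	"net"
    )

    // taken reports whether any local interface already owns an address
    // inside the candidate subnet (e.g. an existing docker bridge).
    func taken(subnet *net.IPNet) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return true // be conservative on error
    	}
    	for _, a := range addrs {
    		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	for third := 49; third < 255; third += 9 { // 49, 58, 67, ... as in the log
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		_, subnet, _ := net.ParseCIDR(cidr)
    		if !taken(subnet) {
    			fmt.Println("using free private subnet", cidr)
    			return
    		}
    		fmt.Println("skipping subnet", cidr, "that is taken")
    	}
    }
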
	I0804 09:53:15.106512 2029364 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-499486" container
	I0804 09:53:15.106577 2029364 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0804 09:53:15.125877 2029364 cli_runner.go:164] Run: docker volume create no-preload-499486 --label name.minikube.sigs.k8s.io=no-preload-499486 --label created_by.minikube.sigs.k8s.io=true
	I0804 09:53:15.143281 2029364 oci.go:103] Successfully created a docker volume no-preload-499486
	I0804 09:53:15.143345 2029364 cli_runner.go:164] Run: docker run --rm --name no-preload-499486-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-499486 --entrypoint /usr/bin/test -v no-preload-499486:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -d /var/lib
	I0804 09:53:15.369994 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:15.589884 2029364 oci.go:107] Successfully prepared a docker volume no-preload-499486
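
Note: the throwaway `--entrypoint /usr/bin/test` container above ("Successfully prepared a docker volume") leans on a Docker behavior worth spelling out: when an empty named volume is first mounted at a path, Docker pre-populates it from the image's contents at that path. So a run of the form

    docker run --rm --entrypoint /usr/bin/test -v <volume>:/var <kicbase-image> -d /var/lib

(the concrete command is verbatim in the log) both seeds the volume from the kicbase image's /var and, via the `test -d /var/lib` exit status, confirms the seeded volume is usable before the real node container mounts it. This is a plausible reading of the step, not documented intent.
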
	I0804 09:53:15.589927 2029364 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	W0804 09:53:15.590063 2029364 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0804 09:53:15.590202 2029364 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0804 09:53:15.644065 2029364 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-499486 --name no-preload-499486 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-499486 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-499486 --network no-preload-499486 --ip 192.168.94.2 --volume no-preload-499486:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d
	I0804 09:53:15.775058 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:15.932244 2029364 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Running}}
	I0804 09:53:15.959599 2029364 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 09:53:15.980728 2029364 cli_runner.go:164] Run: docker exec no-preload-499486 stat /var/lib/dpkg/alternatives/iptables
	I0804 09:53:16.028581 2029364 oci.go:144] the created container "no-preload-499486" has a running status.
	I0804 09:53:16.028622 2029364 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa...
	I0804 09:53:16.213874 2029364 cache.go:107] acquiring lock: {Name:mka423fb18126d40f4a4f7fca8ec6e3e41082638 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.213878 2029364 cache.go:107] acquiring lock: {Name:mkf6bf097f9b4ab85114a6fa38ad13bfc2488603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.213921 2029364 cache.go:107] acquiring lock: {Name:mk33ce9e689d2e467401f7efa84455ad3f2e92ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.213874 2029364 cache.go:107] acquiring lock: {Name:mkcb7c5aa46ee6392f69a29d6d1585a5e7488cd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.213954 2029364 cache.go:107] acquiring lock: {Name:mkfef881a264b8a3a60f6a6f0c24e47a08186ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.213935 2029364 cache.go:107] acquiring lock: {Name:mk9f4291ac7cb8894a58bf7b28674291cc899ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.214007 2029364 cache.go:107] acquiring lock: {Name:mk7ddaf4fc877a751da8cfe2ede1952cd2ef0b12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.214066 2029364 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:16.213947 2029364 cache.go:107] acquiring lock: {Name:mkfd5a21bbd2e3fa848283c303b92221b810b9b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:53:16.214112 2029364 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:16.214128 2029364 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0804 09:53:16.214184 2029364 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:16.214212 2029364 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I0804 09:53:16.214228 2029364 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:16.214355 2029364 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0804 09:53:16.214370 2029364 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 511.175µs
	I0804 09:53:16.214382 2029364 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0804 09:53:16.214412 2029364 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:16.215875 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:16.216196 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:16.216272 2029364 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I0804 09:53:16.217607 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:16.217675 2029364 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 09:53:16.217626 2029364 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.21-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:16.217625 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:16.388038 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0
	I0804 09:53:16.434953 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0
	I0804 09:53:16.435149 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0
	I0804 09:53:16.448062 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I0804 09:53:16.448317 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0
	I0804 09:53:16.449433 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0
	I0804 09:53:16.449437 2029364 cache.go:162] opening:  /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0804 09:53:16.519505 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0804 09:53:16.519531 2029364 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 305.57699ms
	I0804 09:53:16.519548 2029364 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
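
Note: the interleaved cache.go lines (locks acquired at 09:53:16.213*, then "exists" / "took" / "succeeded") show the per-image flow on a no-preload profile: one goroutine per image takes a lock, checks whether that image's tarball is already under .minikube/cache/images, and only pulls on a miss. A hedged Go sketch of the hit path; the lock type, paths, and the elided pull are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"sync"
    	"time"
    )

    var locks sync.Map // image name -> *sync.Mutex; stands in for minikube's file-based locks

    // cacheToTar reports a hit when the image's tarball already exists
    // under cacheDir; the miss path (pull + write tar) is elided here.
    func cacheToTar(image, cacheDir string) error {
    	m, _ := locks.LoadOrStore(image, &sync.Mutex{})
    	m.(*sync.Mutex).Lock()
    	defer m.(*sync.Mutex).Unlock()

    	start := time.Now()
    	dst := filepath.Join(cacheDir, filepath.FromSlash(image))
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Printf("cache image %q -> %q took %s\n", image, dst, time.Since(start))
    		return nil
    	}
    	return fmt.Errorf("%s not cached yet; pull elided in this sketch", image)
    }

    func main() {
    	_ = cacheToTar("registry.k8s.io/pause_3.10", os.ExpandEnv("$HOME/.minikube/cache/images/amd64"))
    }
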
	I0804 09:53:16.531583 2029364 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0804 09:53:16.556152 2029364 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 09:53:16.576634 2029364 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0804 09:53:16.576651 2029364 kic_runner.go:114] Args: [docker exec --privileged no-preload-499486 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0804 09:53:16.630368 2029364 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 09:53:16.654658 2029364 machine.go:93] provisionDockerMachine start ...
	I0804 09:53:16.654761 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:16.672011 2029364 main.go:141] libmachine: Using SSH client type: native
	I0804 09:53:16.672315 2029364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0804 09:53:16.672329 2029364 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:53:16.804549 2029364 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-499486
	
	I0804 09:53:16.804583 2029364 ubuntu.go:169] provisioning hostname "no-preload-499486"
	I0804 09:53:16.804649 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:16.822535 2029364 main.go:141] libmachine: Using SSH client type: native
	I0804 09:53:16.822740 2029364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0804 09:53:16.822752 2029364 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-499486 && echo "no-preload-499486" | sudo tee /etc/hostname
	I0804 09:53:16.961737 2029364 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-499486
	
	I0804 09:53:16.961822 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:16.984066 2029364 main.go:141] libmachine: Using SSH client type: native
	I0804 09:53:16.984306 2029364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0804 09:53:16.984331 2029364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-499486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-499486/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-499486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:53:17.077386 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0 exists
	I0804 09:53:17.077420 2029364 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0" took 863.560118ms
	I0804 09:53:17.077433 2029364 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0 succeeded
	I0804 09:53:17.114124 2029364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:53:17.114160 2029364 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:53:17.114202 2029364 ubuntu.go:177] setting up certificates
	I0804 09:53:17.114216 2029364 provision.go:84] configureAuth start
	I0804 09:53:17.114270 2029364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-499486
	I0804 09:53:17.137943 2029364 provision.go:143] copyHostCerts
	I0804 09:53:17.138010 2029364 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:53:17.138023 2029364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:53:17.138083 2029364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:53:17.138179 2029364 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:53:17.138190 2029364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:53:17.138222 2029364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:53:17.138299 2029364 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:53:17.138309 2029364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:53:17.138338 2029364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:53:17.138412 2029364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.no-preload-499486 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-499486]
	I0804 09:53:17.382765 2029364 provision.go:177] copyRemoteCerts
	I0804 09:53:17.382815 2029364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:53:17.382866 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:17.401069 2029364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 09:53:17.492008 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:53:17.523933 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 09:53:17.556219 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:53:17.584313 2029364 provision.go:87] duration metric: took 470.081632ms to configureAuth
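
Note: configureAuth above generates a server certificate whose SANs cover every name the node's TLS-guarded Docker endpoint will be reached by: 127.0.0.1 (the published host ports), the container IP 192.168.94.2, localhost, minikube, and the profile name. A small illustrative checker, not part of minikube, that prints those SANs from the cert just scp'd into the node:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path as provisioned inside the node container in the log above.
    	raw, err := os.ReadFile("/etc/docker/server.pem")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, minikube, no-preload-499486
    	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.94.2
    }
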
	I0804 09:53:17.584399 2029364 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:53:17.584582 2029364 config.go:182] Loaded profile config "no-preload-499486": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:53:17.584648 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:17.607226 2029364 main.go:141] libmachine: Using SSH client type: native
	I0804 09:53:17.607523 2029364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0804 09:53:17.607544 2029364 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:53:17.663878 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0 exists
	I0804 09:53:17.663922 2029364 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0" took 1.450005383s
	I0804 09:53:17.663941 2029364 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0 succeeded
	I0804 09:53:17.712982 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0 exists
	I0804 09:53:17.713019 2029364 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0" took 1.499051357s
	I0804 09:53:17.713037 2029364 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0 succeeded
	I0804 09:53:17.734936 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0 exists
	I0804 09:53:17.734967 2029364 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0" took 1.521098992s
	I0804 09:53:17.734983 2029364 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0 succeeded
	I0804 09:53:17.746262 2029364 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:53:17.746299 2029364 ubuntu.go:71] root file system type: overlay
	I0804 09:53:17.746414 2029364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:53:17.746491 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:17.767637 2029364 main.go:141] libmachine: Using SSH client type: native
	I0804 09:53:17.767897 2029364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0804 09:53:17.767991 2029364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:53:17.913399 2029364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:53:17.913486 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:17.937671 2029364 main.go:141] libmachine: Using SSH client type: native
	I0804 09:53:17.937973 2029364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0804 09:53:17.938008 2029364 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 09:53:17.954360 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0804 09:53:17.954396 2029364 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.740474831s
	I0804 09:53:17.954421 2029364 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0804 09:53:17.961655 2029364 cache.go:157] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 exists
	I0804 09:53:17.961679 2029364 cache.go:96] cache image "registry.k8s.io/etcd:3.5.21-0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0" took 1.74775312s
	I0804 09:53:17.961690 2029364 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.21-0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 succeeded
	I0804 09:53:17.961708 2029364 cache.go:87] Successfully saved all images to host disk.
	I0804 09:53:19.019650 2029364 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-07-25 11:32:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-08-04 09:53:17.907883249 +0000
	@@ -1,38 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	 StartLimitBurst=3
	 StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	+Restart=on-failure
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	 ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0804 09:53:19.019707 2029364 machine.go:96] duration metric: took 2.365027104s to provisionDockerMachine
	I0804 09:53:19.019726 2029364 client.go:171] duration metric: took 4.029363257s to LocalClient.Create
	I0804 09:53:19.019753 2029364 start.go:167] duration metric: took 4.029432402s to libmachine.API.Create "no-preload-499486"
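
Note: the `sudo diff -u ... || { mv ...; daemon-reload; enable; restart; }` command whose diff output appears above makes the unit update idempotent: docker is only reconfigured and restarted when the freshly rendered docker.service actually differs from the installed one. A hedged Go rendering of the same gate (bytes.Equal standing in for `diff -u`; not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func systemctl(args ...string) error {
    	cmd := exec.Command("systemctl", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	oldUnit, _ := os.ReadFile(unit) // may not exist yet; nil compares unequal
    	newUnit, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		panic(err)
    	}
    	if bytes.Equal(oldUnit, newUnit) {
    		fmt.Println("unit unchanged; skipping restart")
    		return
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
    	} {
    		if err := systemctl(args...); err != nil {
    			panic(err)
    		}
    	}
    }
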
	I0804 09:53:19.019766 2029364 start.go:293] postStartSetup for "no-preload-499486" (driver="docker")
	I0804 09:53:19.019781 2029364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:53:19.019854 2029364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:53:19.019900 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:19.036867 2029364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 09:53:19.127022 2029364 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:53:19.130060 2029364 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:53:19.130086 2029364 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:53:19.130094 2029364 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:53:19.130101 2029364 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:53:19.130111 2029364 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:53:19.130159 2029364 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:53:19.130226 2029364 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:53:19.130316 2029364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 09:53:19.138652 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:53:19.160494 2029364 start.go:296] duration metric: took 140.714664ms for postStartSetup
	I0804 09:53:19.160861 2029364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-499486
	I0804 09:53:19.178163 2029364 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/config.json ...
	I0804 09:53:19.178390 2029364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:53:19.178427 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:19.194212 2029364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 09:53:19.286050 2029364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:53:19.290227 2029364 start.go:128] duration metric: took 4.301953747s to createHost
	I0804 09:53:19.290254 2029364 start.go:83] releasing machines lock for "no-preload-499486", held for 4.302089002s
	I0804 09:53:19.290325 2029364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-499486
	I0804 09:53:19.307515 2029364 ssh_runner.go:195] Run: cat /version.json
	I0804 09:53:19.307573 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:19.307588 2029364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:53:19.307647 2029364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 09:53:19.324937 2029364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 09:53:19.325068 2029364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 09:53:19.408463 2029364 ssh_runner.go:195] Run: systemctl --version
	I0804 09:53:19.486169 2029364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:53:19.490531 2029364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:53:19.514089 2029364 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:53:19.514150 2029364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 09:53:19.539788 2029364 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0804 09:53:19.539813 2029364 start.go:495] detecting cgroup driver to use...
	I0804 09:53:19.539844 2029364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:53:19.539941 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:53:19.554754 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:19.969345 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:53:19.979356 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:53:19.988510 2029364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:53:19.988571 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:53:19.997709 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:53:20.006451 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:53:20.015285 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:53:20.023934 2029364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:53:20.032180 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:53:20.041230 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:53:20.049966 2029364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 09:53:20.058585 2029364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:53:20.066060 2029364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:53:20.073436 2029364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:53:20.150655 2029364 ssh_runner.go:195] Run: sudo systemctl restart containerd
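
Note: the sed series above rewrites /etc/containerd/config.toml so containerd matches the host's cgroupfs driver and minikube's expectations before the restart. A plausible post-edit fragment, reconstructed only from those sed expressions (the surrounding layout depends on the config.toml shipped in the kicbase image):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false

    [plugins."io.containerd.grpc.v1.cri".cni]
      conf_dir = "/etc/cni/net.d"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
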
	I0804 09:53:20.235511 2029364 start.go:495] detecting cgroup driver to use...
	I0804 09:53:20.235569 2029364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:53:20.235638 2029364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:53:20.247237 2029364 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:53:20.247298 2029364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:53:20.258331 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:53:20.274220 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
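
Note: /etc/crictl.yaml is how crictl (and minikube's CRI probes) locate the runtime endpoint. It was first written to point at containerd's socket during the cgroup-driver pass above; because this profile runs the docker runtime, it is rewritten here to cri-dockerd, so after this step the file contains exactly the line from the printf two entries up:

    runtime-endpoint: unix:///var/run/cri-dockerd.sock
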
	I0804 09:53:20.665696 2029364 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:53:20.669338 2029364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:53:20.677670 2029364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 09:53:20.694223 2029364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:53:20.771014 2029364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:53:20.851457 2029364 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:53:20.851599 2029364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 09:53:20.868147 2029364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:53:20.878395 2029364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:53:20.956809 2029364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:53:21.259804 2029364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:53:21.271131 2029364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:53:21.281491 2029364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:53:21.359508 2029364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:53:21.432833 2029364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:53:21.510197 2029364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:53:21.523529 2029364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:53:21.533835 2029364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:53:21.616444 2029364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:53:21.675775 2029364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:53:21.687325 2029364 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:53:21.687389 2029364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:53:21.690646 2029364 start.go:563] Will wait 60s for crictl version
	I0804 09:53:21.690699 2029364 ssh_runner.go:195] Run: which crictl
	I0804 09:53:21.693843 2029364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:53:21.725468 2029364 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 09:53:21.725522 2029364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:53:21.750319 2029364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:53:21.777844 2029364 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 09:53:21.777919 2029364 cli_runner.go:164] Run: docker network inspect no-preload-499486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:53:21.795378 2029364 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0804 09:53:21.798981 2029364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 09:53:21.809491 2029364 kubeadm.go:875] updating cluster {Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:53:21.809678 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:22.212075 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:22.590777 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:23.009864 2029364 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:53:23.009947 2029364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:53:23.029409 2029364 docker.go:703] Got preloaded images: 
	I0804 09:53:23.029430 2029364 docker.go:709] registry.k8s.io/kube-apiserver:v1.34.0-beta.0 wasn't preloaded
	I0804 09:53:23.029438 2029364 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.0-beta.0 registry.k8s.io/kube-controller-manager:v1.34.0-beta.0 registry.k8s.io/kube-scheduler:v1.34.0-beta.0 registry.k8s.io/kube-proxy:v1.34.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.21-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 09:53:23.030640 2029364 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 09:53:23.031038 2029364 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:23.031228 2029364 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:23.031446 2029364 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 09:53:23.031826 2029364 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:23.032005 2029364 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:23.032328 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:23.032336 2029364 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:23.032371 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:23.032632 2029364 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0804 09:53:23.032701 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:23.032753 2029364 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:23.032891 2029364 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I0804 09:53:23.033088 2029364 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.21-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:23.033372 2029364 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 09:53:23.033621 2029364 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
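
Note: the "daemon lookup ... No such image" lines are expected on a no-preload start: the local Docker daemon has none of these images, so each resolves from cache instead. The inspect / "needs transfer" exchange that follows decides, per image, whether the node's runtime already holds the exact ID recorded for the cached tar; on a mismatch the stale image is removed and reloaded from cache. A hedged Go sketch of that check (ID format and comparison are illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer returns true unless the runtime reports exactly the
    // image ID recorded for the cached tarball.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // image not present at all
    	}
    	return strings.TrimSpace(string(out)) != "sha256:"+wantID
    }

    func main() {
    	// Hash taken from the pause:3.10 "needs transfer" line below.
    	fmt.Println(needsTransfer("registry.k8s.io/pause:3.10",
    		"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"))
    }
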
	I0804 09:53:23.146050 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:23.149705 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:23.154191 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:23.167059 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I0804 09:53:23.167994 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:23.168815 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:23.169336 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 09:53:23.169828 2029364 cache_images.go:117] "registry.k8s.io/etcd:3.5.21-0" needs transfer: "registry.k8s.io/etcd:3.5.21-0" does not exist at hash "499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1" in container runtime
	I0804 09:53:23.169875 2029364 docker.go:350] Removing image: registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:23.169911 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.21-0
	I0804 09:53:23.172017 2029364 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.0-beta.0" does not exist at hash "21d34a2aeacf50a8e47e77c972881726a216b817bbb276ea0f3c72200a4c5981" in container runtime
	I0804 09:53:23.172059 2029364 docker.go:350] Removing image: registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:23.172099 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	I0804 09:53:23.180110 2029364 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.0-beta.0" does not exist at hash "d85eea91cc41d02b12e6ee2ad012006130cd8674faf51465c6d28a98448d8531" in container runtime
	I0804 09:53:23.180179 2029364 docker.go:350] Removing image: registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:23.180223 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	I0804 09:53:23.265066 2029364 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I0804 09:53:23.265122 2029364 docker.go:350] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I0804 09:53:23.265170 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0804 09:53:23.266486 2029364 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.34.0-beta.0" does not exist at hash "c3709a85b683daaf3cdc79801e6f4718a0d57414e0238f231227818abd98f6bf" in container runtime
	I0804 09:53:23.266541 2029364 docker.go:350] Removing image: registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:23.266593 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.0-beta.0
	I0804 09:53:23.289114 2029364 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.0-beta.0" does not exist at hash "9ad783615e1bcab361c82a9806b5005b33be3f6aa181043df837a10d1e523451" in container runtime
	I0804 09:53:23.289159 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0
	I0804 09:53:23.289164 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0
	I0804 09:53:23.289179 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0
	I0804 09:53:23.289185 2029364 docker.go:350] Removing image: registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:23.289223 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	I0804 09:53:23.289119 2029364 cache_images.go:117] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0804 09:53:23.289282 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0
	I0804 09:53:23.289289 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0
	I0804 09:53:23.289292 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.21-0
	I0804 09:53:23.289305 2029364 docker.go:350] Removing image: registry.k8s.io/pause:3.10
	I0804 09:53:23.289356 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I0804 09:53:23.289448 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I0804 09:53:23.289371 2029364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10
	I0804 09:53:23.293271 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0
	I0804 09:53:23.293358 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.0-beta.0
	I0804 09:53:23.311974 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.21-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.21-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.21-0': No such file or directory
	I0804 09:53:23.311998 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0
	I0804 09:53:23.312010 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 --> /var/lib/minikube/images/etcd_3.5.21-0 (58948096 bytes)
	I0804 09:53:23.312090 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0
	I0804 09:53:23.313715 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0804 09:53:23.313756 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I0804 09:53:23.313779 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I0804 09:53:23.313802 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0804 09:53:23.313820 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0': No such file or directory
	I0804 09:53:23.313836 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0 (16935936 bytes)
	I0804 09:53:23.313839 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0': No such file or directory
	I0804 09:53:23.313862 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0 (26486784 bytes)
	I0804 09:53:23.313870 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.0-beta.0': No such file or directory
	I0804 09:53:23.313881 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.34.0-beta.0 (25640448 bytes)
	I0804 09:53:23.363895 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0': No such file or directory
	I0804 09:53:23.363940 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0 (22404608 bytes)
	I0804 09:53:23.396787 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0804 09:53:23.396832 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0804 09:53:23.508638 2029364 docker.go:317] Loading image: /var/lib/minikube/images/pause_3.10
	I0804 09:53:23.508674 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10 | docker load"
	I0804 09:53:23.609470 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0804 09:53:23.637714 2029364 docker.go:317] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0
	I0804 09:53:23.637749 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.0-beta.0 | docker load"
	I0804 09:53:24.593952 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0 from cache
	I0804 09:53:24.594004 2029364 docker.go:317] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I0804 09:53:24.594018 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	I0804 09:53:25.510653 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I0804 09:53:25.510705 2029364 docker.go:317] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.0-beta.0
	I0804 09:53:25.510752 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.0-beta.0 | docker load"
	I0804 09:53:25.514110 2029364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 09:53:26.351235 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0 from cache
	I0804 09:53:26.351276 2029364 docker.go:317] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0
	I0804 09:53:26.351290 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.0-beta.0 | docker load"
	I0804 09:53:26.351340 2029364 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 09:53:26.351394 2029364 docker.go:350] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 09:53:26.351442 2029364 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 09:53:26.933692 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0 from cache
	I0804 09:53:26.933738 2029364 docker.go:317] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0
	I0804 09:53:26.933755 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.0-beta.0 | docker load"
	I0804 09:53:26.933768 2029364 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 09:53:26.933844 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0804 09:53:27.606620 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0 from cache
	I0804 09:53:27.606653 2029364 docker.go:317] Loading image: /var/lib/minikube/images/etcd_3.5.21-0
	I0804 09:53:27.606666 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.21-0 | docker load"
	I0804 09:53:27.606790 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0804 09:53:27.606826 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0804 09:53:29.567727 2029364 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.21-0 | docker load": (1.961035029s)
	I0804 09:53:29.567765 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 from cache
	I0804 09:53:29.567810 2029364 docker.go:317] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 09:53:29.567847 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0804 09:53:29.989357 2029364 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 09:53:29.989462 2029364 cache_images.go:124] Successfully loaded all cached images
	I0804 09:53:29.989475 2029364 cache_images.go:93] duration metric: took 6.960022783s to LoadCachedImages
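
Note: the load loop above follows one fixed pattern per cached image — stat the tarball on the node, scp it over when the stat fails, then stream it into the Docker daemon. A minimal by-hand sketch of that sequence (same paths as this run; NODE stands for the minikube container and is illustrative):

    IMG=pause_3.10
    CACHE=/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/$IMG
    # a non-zero exit from stat means the tarball is missing and must be copied
    ssh NODE "stat -c '%s %y' /var/lib/minikube/images/$IMG" \
      || scp "$CACHE" "NODE:/var/lib/minikube/images/$IMG"
    # load the transferred tarball into the container runtime
    ssh NODE "sudo cat /var/lib/minikube/images/$IMG | docker load"
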
	I0804 09:53:29.989490 2029364 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 09:53:29.989653 2029364 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-499486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 09:53:29.989726 2029364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:53:30.040993 2029364 cni.go:84] Creating CNI manager for ""
	I0804 09:53:30.041028 2029364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:53:30.041046 2029364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 09:53:30.041068 2029364 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-499486 NodeName:no-preload-499486 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:53:30.041230 2029364 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-499486"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 09:53:30.041313 2029364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:53:30.050486 2029364 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0-beta.0': No such file or directory
	
	Initiating transfer...
	I0804 09:53:30.050535 2029364 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:53:30.059524 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubectl.sha256
	I0804 09:53:30.059571 2029364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:53:30.059574 2029364 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/linux/amd64/v1.34.0-beta.0/kubelet
	I0804 09:53:30.059622 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl
	I0804 09:53:30.059654 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm
	I0804 09:53:30.064546 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm': No such file or directory
	I0804 09:53:30.064574 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/linux/amd64/v1.34.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm (71512248 bytes)
	I0804 09:53:30.064628 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl': No such file or directory
	I0804 09:53:30.064674 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/linux/amd64/v1.34.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl (58802360 bytes)
	I0804 09:53:42.132659 2029364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:53:42.145296 2029364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0-beta.0/kubelet
	I0804 09:53:42.149064 2029364 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet': No such file or directory
	I0804 09:53:42.149100 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/linux/amd64/v1.34.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.34.0-beta.0/kubelet (57733412 bytes)
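
Note: the kubelet fetch above uses the dl.k8s.io checksum URL pattern; the equivalent manual download with digest verification (the same pattern kubernetes.io documents for kubectl) would be:

    V=v1.34.0-beta.0
    curl -LO "https://dl.k8s.io/release/$V/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/$V/bin/linux/amd64/kubelet.sha256"
    # prints "kubelet: OK" when the binary matches the published digest
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
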
	I0804 09:53:42.297978 2029364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:53:42.307549 2029364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 09:53:42.324720 2029364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 09:53:42.341994 2029364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
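
Note: at this point the config rendered at kubeadm.go:195 above has landed on the node as /var/tmp/minikube/kubeadm.yaml.new. On kubeadm releases that ship the subcommand (v1.26 and later) that file could be sanity-checked offline before init runs — a sketch, not something this run actually does:

    # validates the InitConfiguration/ClusterConfiguration/component-config documents
    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
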
	I0804 09:53:42.359241 2029364 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:53:42.362856 2029364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
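
Note: the two-step hosts edit above is deliberate — a plain `sudo grep ... > /etc/hosts` would fail because the redirection runs in the unprivileged shell — so the filtered content is staged under /tmp and installed with sudo cp. Generalized (IP and NAME are placeholders), the idempotent pattern is:

    IP=192.168.94.2; NAME=control-plane.minikube.internal
    # drop any stale mapping for NAME, append the current one, install via sudo
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
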
	I0804 09:53:42.373373 2029364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:53:42.459968 2029364 ssh_runner.go:195] Run: sudo systemctl start kubelet
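
Note: with the unit files in place, daemon-reload plus start brings kubelet up; the same health endpoint that kubeadm's kubelet-check polls later in this log can be probed directly on the node:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    # kubeadm's kubelet-check hits this endpoint (see the 10248/healthz lines below)
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
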
	I0804 09:53:42.474319 2029364 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486 for IP: 192.168.94.2
	I0804 09:53:42.474353 2029364 certs.go:194] generating shared ca certs ...
	I0804 09:53:42.474378 2029364 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:42.474547 2029364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:53:42.474597 2029364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:53:42.474611 2029364 certs.go:256] generating profile certs ...
	I0804 09:53:42.474687 2029364 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.key
	I0804 09:53:42.474718 2029364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.crt with IP's: []
	I0804 09:53:42.518947 2029364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.crt ...
	I0804 09:53:42.518975 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.crt: {Name:mk9aa0a7d10cd4daf66b63ea39fb6e1e8905133c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:42.519163 2029364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.key ...
	I0804 09:53:42.519177 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.key: {Name:mk818b8add215415977ca8c534fca2d16bb075bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:42.519289 2029364 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key.e2e5da35
	I0804 09:53:42.519312 2029364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt.e2e5da35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0804 09:53:42.928473 2029364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt.e2e5da35 ...
	I0804 09:53:42.928514 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt.e2e5da35: {Name:mk8925f678f5bffa1e41ddf1f39a8f0e34993ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:42.928701 2029364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key.e2e5da35 ...
	I0804 09:53:42.928716 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key.e2e5da35: {Name:mk58849c9f8521803d593027b5817a9e62e63e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:42.928847 2029364 certs.go:381] copying /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt.e2e5da35 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt
	I0804 09:53:42.930494 2029364 certs.go:385] copying /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key.e2e5da35 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key
	I0804 09:53:42.930626 2029364 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.key
	I0804 09:53:42.930663 2029364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.crt with IP's: []
	I0804 09:53:43.101526 2029364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.crt ...
	I0804 09:53:43.101556 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.crt: {Name:mk6c623e048f47bae623e9f92ca216d74140161f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:43.101746 2029364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.key ...
	I0804 09:53:43.101763 2029364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.key: {Name:mk1c2dd88b91cfa62a02af8c513ee532faf09b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:53:43.101985 2029364 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:53:43.102032 2029364 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:53:43.102049 2029364 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:53:43.102079 2029364 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:53:43.102113 2029364 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:53:43.102147 2029364 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:53:43.102199 2029364 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:53:43.102852 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:53:43.127439 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:53:43.151446 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:53:43.174840 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:53:43.198308 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 09:53:43.224516 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 09:53:43.248856 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:53:43.278114 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 09:53:43.304228 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:53:43.328647 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:53:43.351308 2029364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:53:43.376858 2029364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
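
Note: the apiserver certificate generated above was signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]; after the scp, it can be confirmed on the node that those SANs made it into the installed cert:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
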
	I0804 09:53:43.393926 2029364 ssh_runner.go:195] Run: openssl version
	I0804 09:53:43.399715 2029364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:53:43.408678 2029364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:53:43.412469 2029364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:53:43.412522 2029364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:53:43.419224 2029364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 09:53:43.428215 2029364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:53:43.437015 2029364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:53:43.440170 2029364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:53:43.440212 2029364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:53:43.446654 2029364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 09:53:43.455614 2029364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:53:43.464132 2029364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:53:43.467383 2029364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:53:43.467423 2029364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:53:43.473831 2029364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
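
Note: the openssl/ln pairs above implement OpenSSL's CA directory convention — certificates in /etc/ssl/certs are looked up by a filename of the form <subject-hash>.0 — so each PEM gets a symlink named after its subject hash. Done by hand for one of the certs from this run:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
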
	I0804 09:53:43.483240 2029364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:53:43.486465 2029364 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 09:53:43.486527 2029364 kubeadm.go:392] StartCluster: {Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:53:43.486660 2029364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:53:43.507872 2029364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:53:43.516818 2029364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:53:43.525214 2029364 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:53:43.525294 2029364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:53:43.533642 2029364 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:53:43.533660 2029364 kubeadm.go:157] found existing configuration files:
	
	I0804 09:53:43.533701 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:53:43.541672 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:53:43.541727 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:53:43.549784 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:53:43.558259 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:53:43.558332 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:53:43.568068 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:53:43.576631 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:53:43.576692 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:53:43.585034 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:53:43.593799 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:53:43.593852 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:53:43.601701 2029364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:53:43.658167 2029364 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:53:43.658398 2029364 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:53:43.718693 2029364 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:57:52.927467 2029364 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 09:57:52.927610 2029364 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 09:57:52.930763 2029364 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:57:52.930870 2029364 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:57:52.931028 2029364 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:57:52.931123 2029364 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:57:52.931175 2029364 kubeadm.go:310] OS: Linux
	I0804 09:57:52.931232 2029364 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:57:52.931304 2029364 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:57:52.931384 2029364 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:57:52.931443 2029364 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:57:52.931503 2029364 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:57:52.931561 2029364 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:57:52.931605 2029364 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:57:52.931649 2029364 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:57:52.931697 2029364 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:57:52.931805 2029364 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:57:52.931949 2029364 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:57:52.932105 2029364 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:57:52.932203 2029364 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:57:52.933784 2029364 out.go:235]   - Generating certificates and keys ...
	I0804 09:57:52.933887 2029364 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:57:52.933963 2029364 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:57:52.934061 2029364 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 09:57:52.934149 2029364 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 09:57:52.934246 2029364 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 09:57:52.934337 2029364 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 09:57:52.934433 2029364 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 09:57:52.934584 2029364 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-499486] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0804 09:57:52.934635 2029364 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 09:57:52.934743 2029364 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-499486] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0804 09:57:52.934801 2029364 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 09:57:52.934884 2029364 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 09:57:52.934962 2029364 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 09:57:52.935064 2029364 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:57:52.935146 2029364 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:57:52.935227 2029364 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:57:52.935296 2029364 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:57:52.935377 2029364 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:57:52.935463 2029364 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:57:52.935561 2029364 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:57:52.935653 2029364 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:57:52.937485 2029364 out.go:235]   - Booting up control plane ...
	I0804 09:57:52.937602 2029364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:57:52.937715 2029364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:57:52.937808 2029364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:57:52.937951 2029364 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:57:52.938096 2029364 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:57:52.938197 2029364 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:57:52.938268 2029364 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:57:52.938311 2029364 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:57:52.938500 2029364 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:57:52.938658 2029364 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:57:52.938740 2029364 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001529911s
	I0804 09:57:52.938860 2029364 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:57:52.938973 2029364 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0804 09:57:52.939050 2029364 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:57:52.939120 2029364 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:57:52.939183 2029364 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.932197013s
	I0804 09:57:52.939239 2029364 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 33.863968891s
	I0804 09:57:52.939304 2029364 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000149199s
	I0804 09:57:52.939310 2029364 kubeadm.go:310] 
	I0804 09:57:52.939384 2029364 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:57:52.939485 2029364 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:57:52.939590 2029364 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:57:52.939708 2029364 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:57:52.939799 2029364 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:57:52.939868 2029364 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:57:52.939920 2029364 kubeadm.go:310] 
	W0804 09:57:52.940002 2029364 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-499486] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-499486] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001529911s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.932197013s
	[control-plane-check] kube-scheduler is healthy after 33.863968891s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000149199s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
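
Note: the failure mode is visible in the control-plane-check lines — kube-controller-manager and kube-scheduler become healthy, but kube-apiserver never answers on 192.168.94.2:8443, so init times out after 4m0s. kubeadm's own crictl hint above applies; with the docker runtime used in this run, a rough first-pass inspection (before the retry below resets the node) would be:

    # list the (possibly exited) apiserver container
    docker ps -a --filter name=k8s_kube-apiserver
    # read why it died; <container-id> comes from the listing above
    docker logs <container-id>
    # or go through the CRI socket exactly as kubeadm suggests
    crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
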
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-499486] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-499486] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001529911s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 3.932197013s
	[control-plane-check] kube-scheduler is healthy after 33.863968891s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000149199s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
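
A minimal shell sketch of the triage flow the message above recommends, assuming the cri-dockerd socket path shown in the log (CONTAINERID is a placeholder for an ID returned by the first command):

	# list all Kubernetes containers known to cri-dockerd, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
	# inspect the logs of the failing container
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID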
	
	I0804 09:57:52.940041 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:57:53.772205 2029364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:57:53.783949 2029364 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:57:53.784002 2029364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:57:53.793352 2029364 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:57:53.793371 2029364 kubeadm.go:157] found existing configuration files:
	
	I0804 09:57:53.793402 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:57:53.801453 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:57:53.801515 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:57:53.810055 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:57:53.818087 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:57:53.818127 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:57:53.825646 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:57:53.833573 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:57:53.833617 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:57:53.841194 2029364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:57:53.848781 2029364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:57:53.848833 2029364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
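
The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed sketch of the same pattern, with the endpoint and file names taken from the log:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the file only if it references the expected endpoint
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done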
	I0804 09:57:53.856430 2029364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:57:53.894965 2029364 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:57:53.895019 2029364 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:57:53.910819 2029364 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:57:53.910893 2029364 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:57:53.910933 2029364 kubeadm.go:310] OS: Linux
	I0804 09:57:53.910969 2029364 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:57:53.911028 2029364 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:57:53.911107 2029364 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:57:53.911185 2029364 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:57:53.911229 2029364 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:57:53.911269 2029364 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:57:53.911326 2029364 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:57:53.911379 2029364 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:57:53.911466 2029364 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:57:53.973628 2029364 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:57:53.973728 2029364 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:57:53.973847 2029364 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:57:53.986499 2029364 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:57:53.994716 2029364 out.go:235]   - Generating certificates and keys ...
	I0804 09:57:53.994838 2029364 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:57:53.994947 2029364 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:57:53.995073 2029364 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:57:53.995166 2029364 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:57:53.995259 2029364 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:57:53.995339 2029364 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:57:53.995456 2029364 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:57:53.995562 2029364 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:57:53.995665 2029364 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:57:53.995759 2029364 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:57:53.995814 2029364 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:57:53.995894 2029364 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:57:54.369454 2029364 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:57:54.661019 2029364 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:57:55.273459 2029364 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:57:55.379128 2029364 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:57:55.668038 2029364 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:57:55.668677 2029364 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:57:55.671159 2029364 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:57:55.672781 2029364 out.go:235]   - Booting up control plane ...
	I0804 09:57:55.672899 2029364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:57:55.672994 2029364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:57:55.673989 2029364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:57:55.691837 2029364 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:57:55.691968 2029364 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:57:55.698832 2029364 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:57:55.700243 2029364 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:57:55.700309 2029364 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:57:55.796088 2029364 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:57:55.796207 2029364 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:57:56.801216 2029364 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001756019s
	I0804 09:57:56.801383 2029364 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:57:56.801497 2029364 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0804 09:57:56.801611 2029364 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:57:56.801690 2029364 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:58:01.363885 2029364 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.562736657s
	I0804 09:58:31.445488 2029364 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 34.644357034s
	I0804 10:01:56.801885 2029364 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000225132s
	I0804 10:01:56.801953 2029364 kubeadm.go:310] 
	I0804 10:01:56.802188 2029364 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 10:01:56.802400 2029364 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0804 10:01:56.802630 2029364 kubeadm.go:310] Here is one example of how you can list all running Kubernetes containers by using crictl:
	I0804 10:01:56.802867 2029364 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 10:01:56.803036 2029364 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 10:01:56.803210 2029364 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 10:01:56.803224 2029364 kubeadm.go:310] 
	I0804 10:01:56.805935 2029364 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 10:01:56.806126 2029364 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 10:01:56.806217 2029364 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 10:01:56.806509 2029364 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:01:56.806616 2029364 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
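
The health checks above can be replayed by hand against the endpoints kubeadm prints; a sketch assuming the addresses from this run (-k skips TLS verification, since the apiserver serves its own cluster certificate):

	# kubelet health, plain HTTP on localhost
	curl -sf http://127.0.0.1:10248/healthz
	# the apiserver liveness endpoint kubeadm polls; "connection refused" here matches the failure above
	curl -sk "https://192.168.94.2:8443/livez?verbose"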
	I0804 10:01:56.806659 2029364 kubeadm.go:394] duration metric: took 8m13.320138422s to StartCluster
	I0804 10:01:56.806705 2029364 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 10:01:56.806759 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 10:01:56.846546 2029364 cri.go:89] found id: "e784b6261d471625a60a31ed5407f8497591c9eba33d5e9302574480f8028cf2"
	I0804 10:01:56.846573 2029364 cri.go:89] found id: ""
	I0804 10:01:56.846597 2029364 logs.go:282] 1 containers: [e784b6261d471625a60a31ed5407f8497591c9eba33d5e9302574480f8028cf2]
	I0804 10:01:56.846660 2029364 ssh_runner.go:195] Run: which crictl
	I0804 10:01:56.850583 2029364 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 10:01:56.850642 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 10:01:56.883751 2029364 cri.go:89] found id: "ec6b724c64004bec6a42648bb7b307c66aa8a4bfb35ff9968dd22cb4b4f22f10"
	I0804 10:01:56.883778 2029364 cri.go:89] found id: ""
	I0804 10:01:56.883789 2029364 logs.go:282] 1 containers: [ec6b724c64004bec6a42648bb7b307c66aa8a4bfb35ff9968dd22cb4b4f22f10]
	I0804 10:01:56.883846 2029364 ssh_runner.go:195] Run: which crictl
	I0804 10:01:56.887822 2029364 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 10:01:56.887870 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 10:01:56.923524 2029364 cri.go:89] found id: ""
	I0804 10:01:56.923556 2029364 logs.go:282] 0 containers: []
	W0804 10:01:56.923566 2029364 logs.go:284] No container was found matching "coredns"
	I0804 10:01:56.923576 2029364 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 10:01:56.923629 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 10:01:56.957236 2029364 cri.go:89] found id: "2a1c20b2ffee8d9ae54b20db9e9ce1996a5a566bca513bff4186cbd73d384022"
	I0804 10:01:56.957302 2029364 cri.go:89] found id: ""
	I0804 10:01:56.957313 2029364 logs.go:282] 1 containers: [2a1c20b2ffee8d9ae54b20db9e9ce1996a5a566bca513bff4186cbd73d384022]
	I0804 10:01:56.957373 2029364 ssh_runner.go:195] Run: which crictl
	I0804 10:01:56.961043 2029364 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 10:01:56.961118 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 10:01:56.994434 2029364 cri.go:89] found id: ""
	I0804 10:01:56.994465 2029364 logs.go:282] 0 containers: []
	W0804 10:01:56.994475 2029364 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:01:56.994484 2029364 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 10:01:56.994545 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 10:01:57.028564 2029364 cri.go:89] found id: "06b532ee8dabc7eddc8807ffcc1f409bc2426ea25b7c999a22e866007c122c2a"
	I0804 10:01:57.028584 2029364 cri.go:89] found id: ""
	I0804 10:01:57.028591 2029364 logs.go:282] 1 containers: [06b532ee8dabc7eddc8807ffcc1f409bc2426ea25b7c999a22e866007c122c2a]
	I0804 10:01:57.028633 2029364 ssh_runner.go:195] Run: which crictl
	I0804 10:01:57.032290 2029364 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 10:01:57.032351 2029364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 10:01:57.066561 2029364 cri.go:89] found id: ""
	I0804 10:01:57.066591 2029364 logs.go:282] 0 containers: []
	W0804 10:01:57.066602 2029364 logs.go:284] No container was found matching "kindnet"
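
The listing loop above queries cri-dockerd for each expected component by name; empty output means no container, running or exited, exists under that name. The equivalent manual check for one component:

	# prints matching container IDs, or nothing if the component never started
	sudo crictl ps -a --quiet --name=kube-apiserver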
	I0804 10:01:57.066623 2029364 logs.go:123] Gathering logs for kubelet ...
	I0804 10:01:57.066639 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:01:57.151632 2029364 logs.go:123] Gathering logs for kube-apiserver [e784b6261d471625a60a31ed5407f8497591c9eba33d5e9302574480f8028cf2] ...
	I0804 10:01:57.151671 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e784b6261d471625a60a31ed5407f8497591c9eba33d5e9302574480f8028cf2"
	I0804 10:01:57.194110 2029364 logs.go:123] Gathering logs for kube-scheduler [2a1c20b2ffee8d9ae54b20db9e9ce1996a5a566bca513bff4186cbd73d384022] ...
	I0804 10:01:57.194142 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a1c20b2ffee8d9ae54b20db9e9ce1996a5a566bca513bff4186cbd73d384022"
	I0804 10:01:57.256298 2029364 logs.go:123] Gathering logs for kube-controller-manager [06b532ee8dabc7eddc8807ffcc1f409bc2426ea25b7c999a22e866007c122c2a] ...
	I0804 10:01:57.256334 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06b532ee8dabc7eddc8807ffcc1f409bc2426ea25b7c999a22e866007c122c2a"
	I0804 10:01:57.294452 2029364 logs.go:123] Gathering logs for Docker ...
	I0804 10:01:57.294484 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:01:57.319888 2029364 logs.go:123] Gathering logs for dmesg ...
	I0804 10:01:57.319918 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:01:57.346126 2029364 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:01:57.346165 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:01:57.403229 2029364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:01:57.396023    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.396497    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.398064    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.398576    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.400142    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:01:57.396023    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.396497    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.398064    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.398576    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:01:57.400142    6732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:01:57.403256 2029364 logs.go:123] Gathering logs for etcd [ec6b724c64004bec6a42648bb7b307c66aa8a4bfb35ff9968dd22cb4b4f22f10] ...
	I0804 10:01:57.403273 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6b724c64004bec6a42648bb7b307c66aa8a4bfb35ff9968dd22cb4b4f22f10"
	I0804 10:01:57.438673 2029364 logs.go:123] Gathering logs for container status ...
	I0804 10:01:57.438701 2029364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
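
The gathering steps above can be reproduced on the node with the same commands the log records; the container IDs are the ones found earlier in this run and will differ elsewhere:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	# kube-apiserver logs, using the ID found above
	sudo crictl logs --tail 400 e784b6261d471625a60a31ed5407f8497591c9eba33d5e9302574480f8028cf2
	sudo crictl ps -a || sudo docker ps -a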
	W0804 10:01:57.474948 2029364 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001756019s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 4.562736657s
	[control-plane-check] kube-scheduler is healthy after 34.644357034s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000225132s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	W0804 10:01:57.475012 2029364 out.go:270] * 
	W0804 10:01:57.475192 2029364 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001756019s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 4.562736657s
	[control-plane-check] kube-scheduler is healthy after 34.644357034s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000225132s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 10:01:57.475217 2029364 out.go:270] * 
	W0804 10:01:57.477608 2029364 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:01:57.481210 2029364 out.go:201] 
	W0804 10:01:57.482266 2029364 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001756019s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 4.562736657s
	[control-plane-check] kube-scheduler is healthy after 34.644357034s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000225132s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.94.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.94.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 10:01:57.482296 2029364 out.go:270] * 
	W0804 10:01:57.483925 2029364 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:01:57.485749 2029364 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0": exit status 80
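
A sketch for reproducing this failure outside the harness with the same flags the test used, then collecting logs as the advice box suggests (the binary path assumes a local minikube build, as in this report):

	out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true \
	    --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
	out/minikube-linux-amd64 logs -p no-preload-499486 --file=logs.txt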
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2029936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:53:15.69127721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4bc85e8d71c1f7b19543fb7f72dfa5ec983493b724ce990a1931d665bf24114",
	            "SandboxKey": "/var/run/docker/netns/d4bc85e8d71c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:7c:aa:5e:3e:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "5d5bec790cc22478e1fe74ad8dd7d943661e5a0fe9f47479f30e041ca21c6066",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
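Note: in the inspect output above, "HostConfig.PortBindings" only requests ephemeral host ports ("HostPort": "" bound to 127.0.0.1); the ports Docker actually allocated appear later under "NetworkSettings.Ports" (33098-33102 here). Illustrative commands to resolve a mapped port after the fact, assuming the container is still running:

	docker port no-preload-499486 22/tcp
	# 127.0.0.1:33098
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-499486
	# 33098

The second command is the same Go-template lookup minikube itself runs further down in this log to locate the SSH port.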
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 6 (270.426539ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 10:01:57.826888 2127311 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-499486" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-499486" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (523.07s)
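Note: minikube status encodes its result as a bitmask of not-OK components (1 = host, 2 = cluster, 4 = Kubernetes), so exit status 6 means the node container is up but the cluster and Kubernetes checks failed; the root cause is the status.go:458 error above, i.e. the "no-preload-499486" entry is missing from the kubeconfig. The repair the warning suggests (illustrative, assuming the profile still exists):

	out/minikube-linux-amd64 -p no-preload-499486 update-context
	out/minikube-linux-amd64 status -p no-preload-499486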

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (505.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
E0804 09:54:32.790619 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 80 (8m25.528440684s)

                                                
                                                
-- stdout --
	* [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 09:54:30.659728 2043876 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:54:30.659963 2043876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:54:30.659972 2043876 out.go:358] Setting ErrFile to fd 2...
	I0804 09:54:30.659976 2043876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:54:30.660139 2043876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:54:30.660714 2043876 out.go:352] Setting JSON to false
	I0804 09:54:30.662011 2043876 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153360,"bootTime":1754147911,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:54:30.662102 2043876 start.go:140] virtualization: kvm guest
	I0804 09:54:30.663921 2043876 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:54:30.664977 2043876 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:54:30.664984 2043876 notify.go:220] Checking for updates...
	I0804 09:54:30.667015 2043876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:54:30.668099 2043876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:54:30.669124 2043876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:54:30.670120 2043876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:54:30.671102 2043876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:54:30.672483 2043876 config.go:182] Loaded profile config "default-k8s-diff-port-670157": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:54:30.672578 2043876 config.go:182] Loaded profile config "kubernetes-upgrade-402519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:54:30.672666 2043876 config.go:182] Loaded profile config "no-preload-499486": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:54:30.672748 2043876 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:54:30.695540 2043876 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:54:30.695683 2043876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:54:30.746485 2043876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 09:54:30.737174163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:54:30.746611 2043876 docker.go:318] overlay module found
	I0804 09:54:30.748319 2043876 out.go:177] * Using the docker driver based on user configuration
	I0804 09:54:30.749392 2043876 start.go:304] selected driver: docker
	I0804 09:54:30.749407 2043876 start.go:918] validating driver "docker" against <nil>
	I0804 09:54:30.749422 2043876 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:54:30.750199 2043876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:54:30.799225 2043876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 09:54:30.79040601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:54:30.799401 2043876 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0804 09:54:30.799438 2043876 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
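	Note: the warning fires because --network-plugin=cni only declares that a CNI will be used; it does not install one. As the next few lines show, minikube falls back to auto-selecting the bridge CNI for the docker driver on Kubernetes v1.24+. The friendlier form the warning points at would be something like (illustrative):

	out/minikube-linux-amd64 start -p newest-cni-768931 --cni=bridge --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0-beta.0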
	I0804 09:54:30.799688 2043876 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 09:54:30.801644 2043876 out.go:177] * Using Docker driver with root privileges
	I0804 09:54:30.802629 2043876 cni.go:84] Creating CNI manager for ""
	I0804 09:54:30.802706 2043876 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:54:30.802722 2043876 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 09:54:30.802812 2043876 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:54:30.803974 2043876 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 09:54:30.804862 2043876 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 09:54:30.805862 2043876 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 09:54:30.806857 2043876 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:54:30.806891 2043876 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 09:54:30.806904 2043876 cache.go:56] Caching tarball of preloaded images
	I0804 09:54:30.806939 2043876 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 09:54:30.807004 2043876 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 09:54:30.807020 2043876 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 09:54:30.807134 2043876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 09:54:30.807169 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json: {Name:mk3c02ad2eccb9557ab7d918eb284a8943b424d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:30.826663 2043876 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 09:54:30.826685 2043876 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 09:54:30.826700 2043876 cache.go:230] Successfully downloaded all kic artifacts
	I0804 09:54:30.826738 2043876 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 09:54:30.826831 2043876 start.go:364] duration metric: took 72.724µs to acquireMachinesLock for "newest-cni-768931"
	I0804 09:54:30.826853 2043876 start.go:93] Provisioning new machine with config: &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 09:54:30.826906 2043876 start.go:125] createHost starting for "" (driver="docker")
	I0804 09:54:30.829059 2043876 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0804 09:54:30.829301 2043876 start.go:159] libmachine.API.Create for "newest-cni-768931" (driver="docker")
	I0804 09:54:30.829338 2043876 client.go:168] LocalClient.Create starting
	I0804 09:54:30.829413 2043876 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem
	I0804 09:54:30.829445 2043876 main.go:141] libmachine: Decoding PEM data...
	I0804 09:54:30.829461 2043876 main.go:141] libmachine: Parsing certificate...
	I0804 09:54:30.829538 2043876 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem
	I0804 09:54:30.829560 2043876 main.go:141] libmachine: Decoding PEM data...
	I0804 09:54:30.829571 2043876 main.go:141] libmachine: Parsing certificate...
	I0804 09:54:30.829876 2043876 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0804 09:54:30.846860 2043876 cli_runner.go:211] docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0804 09:54:30.846941 2043876 network_create.go:284] running [docker network inspect newest-cni-768931] to gather additional debugging logs...
	I0804 09:54:30.846968 2043876 cli_runner.go:164] Run: docker network inspect newest-cni-768931
	W0804 09:54:30.862282 2043876 cli_runner.go:211] docker network inspect newest-cni-768931 returned with exit code 1
	I0804 09:54:30.862316 2043876 network_create.go:287] error running [docker network inspect newest-cni-768931]: docker network inspect newest-cni-768931: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-768931 not found
	I0804 09:54:30.862333 2043876 network_create.go:289] output of [docker network inspect newest-cni-768931]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-768931 not found
	
	** /stderr **
	I0804 09:54:30.862479 2043876 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:54:30.879567 2043876 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4122743d943 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:66:3d:c4:8d:93} reservation:<nil>}
	I0804 09:54:30.880241 2043876 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8451716aa30c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:1d:5b:3c:f6:bd} reservation:<nil>}
	I0804 09:54:30.880899 2043876 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9d42b63aa0b7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3e:9d:f7:36:38:48} reservation:<nil>}
	I0804 09:54:30.881718 2043876 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d8d4a0}
	I0804 09:54:30.881742 2043876 network_create.go:124] attempt to create docker network newest-cni-768931 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0804 09:54:30.881780 2043876 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-768931 newest-cni-768931
	I0804 09:54:30.932377 2043876 network_create.go:108] docker network newest-cni-768931 192.168.76.0/24 created
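	Note: the subnet probe above steps through minikube's candidate private /24s in increments of 9 (192.168.49.0, .58.0, .67.0, ...) and creates the cluster network on the first one no existing bridge claims, here 192.168.76.0/24. To list the subnets currently taken on a host (illustrative):

	docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' $(docker network ls -q)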
	I0804 09:54:30.932413 2043876 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-768931" container
	I0804 09:54:30.932483 2043876 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0804 09:54:30.948990 2043876 cli_runner.go:164] Run: docker volume create newest-cni-768931 --label name.minikube.sigs.k8s.io=newest-cni-768931 --label created_by.minikube.sigs.k8s.io=true
	I0804 09:54:30.966129 2043876 oci.go:103] Successfully created a docker volume newest-cni-768931
	I0804 09:54:30.966255 2043876 cli_runner.go:164] Run: docker run --rm --name newest-cni-768931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-768931 --entrypoint /usr/bin/test -v newest-cni-768931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -d /var/lib
	I0804 09:54:31.383528 2043876 oci.go:107] Successfully prepared a docker volume newest-cni-768931
	I0804 09:54:31.383571 2043876 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:54:31.383597 2043876 kic.go:194] Starting extracting preloaded images to volume ...
	I0804 09:54:31.383662 2043876 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-768931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -I lz4 -xf /preloaded.tar -C /extractDir
	I0804 09:54:34.965175 2043876 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-768931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d -I lz4 -xf /preloaded.tar -C /extractDir: (3.581470976s)
	I0804 09:54:34.965206 2043876 kic.go:203] duration metric: took 3.581605647s to extract preloaded images to volume ...
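	Note: the preload tarball is extracted directly into the "newest-cni-768931" named volume, which the node container then mounts at /var, so the dockerd running inside the node starts with the v1.34.0-beta.0 images already in its store. A rough sanity check that the volume was populated (illustrative; any image providing du, e.g. alpine, works):

	docker run --rm -v newest-cni-768931:/var alpine du -sh /var/lib/docker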
	W0804 09:54:34.965407 2043876 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0804 09:54:34.965514 2043876 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0804 09:54:35.012921 2043876 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-768931 --name newest-cni-768931 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-768931 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-768931 --network newest-cni-768931 --ip 192.168.76.2 --volume newest-cni-768931:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d
	I0804 09:54:35.262666 2043876 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Running}}
	I0804 09:54:35.281446 2043876 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 09:54:35.300034 2043876 cli_runner.go:164] Run: docker exec newest-cni-768931 stat /var/lib/dpkg/alternatives/iptables
	I0804 09:54:35.341200 2043876 oci.go:144] the created container "newest-cni-768931" has a running status.
	I0804 09:54:35.341250 2043876 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa...
	I0804 09:54:35.899462 2043876 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0804 09:54:35.921955 2043876 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 09:54:35.942387 2043876 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0804 09:54:35.942412 2043876 kic_runner.go:114] Args: [docker exec --privileged newest-cni-768931 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0804 09:54:35.983918 2043876 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 09:54:36.003566 2043876 machine.go:93] provisionDockerMachine start ...
	I0804 09:54:36.003680 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:36.021719 2043876 main.go:141] libmachine: Using SSH client type: native
	I0804 09:54:36.021965 2043876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0804 09:54:36.021978 2043876 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 09:54:36.148731 2043876 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 09:54:36.148771 2043876 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 09:54:36.148845 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:36.166928 2043876 main.go:141] libmachine: Using SSH client type: native
	I0804 09:54:36.167193 2043876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0804 09:54:36.167210 2043876 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 09:54:36.302664 2043876 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 09:54:36.302760 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:36.322976 2043876 main.go:141] libmachine: Using SSH client type: native
	I0804 09:54:36.323305 2043876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0804 09:54:36.323336 2043876 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 09:54:36.449150 2043876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 09:54:36.449199 2043876 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 09:54:36.449296 2043876 ubuntu.go:177] setting up certificates
	I0804 09:54:36.449311 2043876 provision.go:84] configureAuth start
	I0804 09:54:36.449386 2043876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 09:54:36.469748 2043876 provision.go:143] copyHostCerts
	I0804 09:54:36.469812 2043876 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 09:54:36.469825 2043876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 09:54:36.469882 2043876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 09:54:36.469979 2043876 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 09:54:36.469991 2043876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 09:54:36.470021 2043876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 09:54:36.470105 2043876 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 09:54:36.470117 2043876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 09:54:36.470150 2043876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 09:54:36.470212 2043876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 09:54:36.613447 2043876 provision.go:177] copyRemoteCerts
	I0804 09:54:36.613503 2043876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 09:54:36.613545 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:36.631854 2043876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 09:54:36.721790 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 09:54:36.743688 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 09:54:36.765147 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 09:54:36.786378 2043876 provision.go:87] duration metric: took 337.051898ms to configureAuth
	I0804 09:54:36.786411 2043876 ubuntu.go:193] setting minikube options for container-runtime
	I0804 09:54:36.786574 2043876 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:54:36.786621 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:36.804168 2043876 main.go:141] libmachine: Using SSH client type: native
	I0804 09:54:36.804386 2043876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0804 09:54:36.804399 2043876 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 09:54:36.929541 2043876 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 09:54:36.929565 2043876 ubuntu.go:71] root file system type: overlay
	I0804 09:54:36.929699 2043876 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 09:54:36.929762 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:36.947408 2043876 main.go:141] libmachine: Using SSH client type: native
	I0804 09:54:36.947685 2043876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0804 09:54:36.947746 2043876 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 09:54:37.084947 2043876 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 09:54:37.085040 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:37.103022 2043876 main.go:141] libmachine: Using SSH client type: native
	I0804 09:54:37.103281 2043876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0804 09:54:37.103308 2043876 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
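	Note: this is a change-detection guard: diff -u exits non-zero only when the generated unit differs from the installed one, so the mv / daemon-reload / enable / restart chain after || runs only when something actually changed, keeping repeated provisioning idempotent. The general shape of the pattern (illustrative; file and service names are placeholders):

	diff -u current.conf new.conf || { sudo mv new.conf current.conf && sudo systemctl daemon-reload && sudo systemctl restart some.service; }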
	I0804 09:54:37.862495 2043876 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-07-25 11:32:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-08-04 09:54:37.077400451 +0000
	@@ -1,38 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	 StartLimitBurst=3
	 StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	+Restart=on-failure
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	 ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0804 09:54:37.862525 2043876 machine.go:96] duration metric: took 1.858926514s to provisionDockerMachine
	I0804 09:54:37.862538 2043876 client.go:171] duration metric: took 7.033190625s to LocalClient.Create
	I0804 09:54:37.862557 2043876 start.go:167] duration metric: took 7.033318497s to libmachine.API.Create "newest-cni-768931"
	I0804 09:54:37.862566 2043876 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 09:54:37.862580 2043876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 09:54:37.862647 2043876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 09:54:37.862702 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:37.880161 2043876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 09:54:37.973930 2043876 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 09:54:37.976845 2043876 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 09:54:37.976870 2043876 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 09:54:37.976878 2043876 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 09:54:37.976885 2043876 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 09:54:37.976895 2043876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 09:54:37.976944 2043876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 09:54:37.977012 2043876 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 09:54:37.977095 2043876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 09:54:37.984843 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:54:38.006483 2043876 start.go:296] duration metric: took 143.903454ms for postStartSetup
	I0804 09:54:38.006795 2043876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 09:54:38.026438 2043876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 09:54:38.026714 2043876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:54:38.026769 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:38.045606 2043876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 09:54:38.138325 2043876 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 09:54:38.142611 2043876 start.go:128] duration metric: took 7.315688662s to createHost
	I0804 09:54:38.142640 2043876 start.go:83] releasing machines lock for "newest-cni-768931", held for 7.315798076s
	I0804 09:54:38.142708 2043876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 09:54:38.162012 2043876 ssh_runner.go:195] Run: cat /version.json
	I0804 09:54:38.162061 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:38.162098 2043876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 09:54:38.162176 2043876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 09:54:38.179768 2043876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 09:54:38.180853 2043876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 09:54:38.340914 2043876 ssh_runner.go:195] Run: systemctl --version
	I0804 09:54:38.345360 2043876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 09:54:38.349677 2043876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 09:54:38.372882 2043876 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 09:54:38.372947 2043876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 09:54:38.397116 2043876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
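Note: the two CNI steps above are what the patch and disable messages refer to: the loopback config is edited in place (a "name" field is inserted and cniVersion is pinned to 1.0.0), while any bridge/podman configs are renamed to *.mk_disabled so only minikube's chosen CNI stays active. A hedged sketch of what the patched loopback file is expected to look like (the exact file name under /etc/cni/net.d and any extra fields depend on the base image):

# hedged sketch; file name varies by base image
$ cat /etc/cni/net.d/200-loopback.conf
{
    "cniVersion": "1.0.0",
    "name": "loopback",
    "type": "loopback"
}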
	I0804 09:54:38.397151 2043876 start.go:495] detecting cgroup driver to use...
	I0804 09:54:38.397189 2043876 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:54:38.397350 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:54:38.411914 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:38.820611 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 09:54:38.830936 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 09:54:38.840573 2043876 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 09:54:38.840631 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 09:54:38.850759 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:54:38.859707 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 09:54:38.868289 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 09:54:38.876951 2043876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 09:54:38.885332 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 09:54:38.894054 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 09:54:38.902705 2043876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
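Note: the sed runs above rewrite /etc/containerd/config.toml so containerd matches the detected "cgroupfs" driver. A hedged reconstruction of the fragment those edits leave behind (key placement is illustrative; only the edited values are taken from the commands above):

# file: /etc/containerd/config.toml (fragment, reconstructed)
[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.10"
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false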
	I0804 09:54:38.911339 2043876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 09:54:38.919176 2043876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 09:54:38.926671 2043876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:54:39.009843 2043876 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 09:54:39.092911 2043876 start.go:495] detecting cgroup driver to use...
	I0804 09:54:39.092957 2043876 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 09:54:39.093012 2043876 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 09:54:39.106076 2043876 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 09:54:39.106134 2043876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 09:54:39.117964 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 09:54:39.136293 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:39.531629 2043876 ssh_runner.go:195] Run: which cri-dockerd
	I0804 09:54:39.535428 2043876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 09:54:39.543824 2043876 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
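Note: the 190-byte drop-in written here is what points cri-dockerd at CNI networking. A hedged sketch based on minikube's usual template (the exact flags are an assumption, not read from this run):

# hedged sketch of /etc/systemd/system/cri-docker.service.d/10-cni.conf
[Service]
ExecStart=
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=hairpin-veth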
	I0804 09:54:39.559995 2043876 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 09:54:39.638988 2043876 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 09:54:39.726868 2043876 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 09:54:39.726999 2043876 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
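Note: the 130-byte daemon.json written here is what forces Docker onto the "cgroupfs" driver named in the previous line. A hedged reconstruction (the cgroup driver key is implied by the log; the remaining keys are assumptions from minikube's usual template):

# hedged reconstruction of /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}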
	I0804 09:54:39.743953 2043876 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 09:54:39.754066 2043876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:54:39.828029 2043876 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 09:54:40.114020 2043876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 09:54:40.126105 2043876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:54:40.138595 2043876 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 09:54:40.220249 2043876 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 09:54:40.304551 2043876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:54:40.388118 2043876 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 09:54:40.401224 2043876 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 09:54:40.411269 2043876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:54:40.484681 2043876 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 09:54:40.544870 2043876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 09:54:40.556314 2043876 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 09:54:40.556374 2043876 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 09:54:40.559828 2043876 start.go:563] Will wait 60s for crictl version
	I0804 09:54:40.559872 2043876 ssh_runner.go:195] Run: which crictl
	I0804 09:54:40.562928 2043876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 09:54:40.594194 2043876 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 09:54:40.594268 2043876 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:54:40.617382 2043876 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 09:54:40.644519 2043876 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 09:54:40.644596 2043876 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 09:54:40.661689 2043876 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 09:54:40.665236 2043876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
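Note: the bash one-liner above updates /etc/hosts idempotently: grep -v drops any stale host.minikube.internal entry, the fresh mapping is appended, and the result is copied back with sudo. Afterwards the guest resolves the host gateway by name:

$ grep host.minikube.internal /etc/hosts
192.168.76.1	host.minikube.internal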
	I0804 09:54:40.677961 2043876 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 09:54:40.678886 2043876 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 09:54:40.679112 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:41.068371 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:41.453760 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:41.845967 2043876 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 09:54:41.846134 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:42.229573 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:42.638091 2043876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 09:54:43.022144 2043876 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:54:43.042054 2043876 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 09:54:43.042077 2043876 docker.go:633] Images already preloaded, skipping extraction
	I0804 09:54:43.042129 2043876 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 09:54:43.061571 2043876 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 09:54:43.061597 2043876 cache_images.go:85] Images are preloaded, skipping loading
	I0804 09:54:43.061610 2043876 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 09:54:43.061714 2043876 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
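Note: the kubelet unit override above clears the stock ExecStart and relaunches the versioned kubelet binary with node-specific flags. Once the drop-in lands on the node (see the scp to 10-kubeadm.conf below), the merged unit can be checked with systemctl; a hedged sketch of the expected shape:

$ sudo systemctl cat kubelet
# /lib/systemd/system/kubelet.service
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2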
	I0804 09:54:43.061780 2043876 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 09:54:43.111838 2043876 cni.go:84] Creating CNI manager for ""
	I0804 09:54:43.111872 2043876 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 09:54:43.111886 2043876 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 09:54:43.111916 2043876 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 09:54:43.112073 2043876 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
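Note: the three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as kubeadm.yaml.new a few lines below. A hedged way to sanity-check the staged file by hand before init ('kubeadm config validate' exists in recent kubeadm releases; its availability in this beta build is an assumption):

sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new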
	
	I0804 09:54:43.112135 2043876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 09:54:43.121048 2043876 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 09:54:43.121133 2043876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 09:54:43.130208 2043876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 09:54:43.146450 2043876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 09:54:43.162486 2043876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0804 09:54:43.178296 2043876 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 09:54:43.181347 2043876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 09:54:43.191365 2043876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 09:54:43.273214 2043876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 09:54:43.285845 2043876 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 09:54:43.285868 2043876 certs.go:194] generating shared ca certs ...
	I0804 09:54:43.285891 2043876 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.286068 2043876 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 09:54:43.286129 2043876 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 09:54:43.286143 2043876 certs.go:256] generating profile certs ...
	I0804 09:54:43.286215 2043876 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 09:54:43.286233 2043876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.crt with IP's: []
	I0804 09:54:43.351834 2043876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.crt ...
	I0804 09:54:43.351874 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.crt: {Name:mkd36466fa14580c9de5a3fd465485b6c7231fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.352040 2043876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key ...
	I0804 09:54:43.352051 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key: {Name:mk4fc98a79c39e713c7ffaa326bf003f88e84f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.352131 2043876 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 09:54:43.352147 2043876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt.a5c16e02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0804 09:54:43.444235 2043876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt.a5c16e02 ...
	I0804 09:54:43.444263 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt.a5c16e02: {Name:mkae5a57f6accb0c8a4af5a0eb2ee45294eb1d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.444417 2043876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02 ...
	I0804 09:54:43.444430 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02: {Name:mkb9d605da278cbecc981cf6e7c9d8f035ac0ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.444509 2043876 certs.go:381] copying /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt.a5c16e02 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt
	I0804 09:54:43.444576 2043876 certs.go:385] copying /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key
	I0804 09:54:43.444630 2043876 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 09:54:43.444645 2043876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt with IP's: []
	I0804 09:54:43.736080 2043876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt ...
	I0804 09:54:43.736126 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt: {Name:mk9b3a86df10a385a0a5b96369b9a7dbc665d7e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.736331 2043876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key ...
	I0804 09:54:43.736354 2043876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key: {Name:mked45ac713e2356e7980562e152ed66be792250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 09:54:43.736607 2043876 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 09:54:43.736665 2043876 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 09:54:43.736680 2043876 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 09:54:43.736712 2043876 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 09:54:43.736744 2043876 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 09:54:43.736773 2043876 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 09:54:43.736834 2043876 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 09:54:43.737593 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 09:54:43.761138 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 09:54:43.783022 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 09:54:43.804971 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 09:54:43.826521 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 09:54:43.848942 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 09:54:43.870639 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 09:54:43.892924 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 09:54:43.914411 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 09:54:43.937049 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 09:54:43.958715 2043876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 09:54:43.980006 2043876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 09:54:43.996750 2043876 ssh_runner.go:195] Run: openssl version
	I0804 09:54:44.002147 2043876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 09:54:44.011652 2043876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 09:54:44.015323 2043876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 09:54:44.015379 2043876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 09:54:44.021857 2043876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 09:54:44.030394 2043876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 09:54:44.038844 2043876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:54:44.041845 2043876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:54:44.041900 2043876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 09:54:44.048232 2043876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 09:54:44.056440 2043876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 09:54:44.064786 2043876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 09:54:44.068261 2043876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 09:54:44.068315 2043876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 09:54:44.074690 2043876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
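Note: the ln -fs targets in the three blocks above follow OpenSSL's subject-hash convention: openssl x509 -hash prints an 8-hex-digit hash of the certificate subject, and <hash>.0 is the name hash-based lookup tools expect under /etc/ssl/certs. Using the minikubeCA hash seen above (the symlink listing is illustrative):

$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
b5213941
$ ls -l /etc/ssl/certs/b5213941.0
... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem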
	I0804 09:54:44.083118 2043876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 09:54:44.086282 2043876 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 09:54:44.086342 2043876 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:54:44.086446 2043876 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 09:54:44.104437 2043876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 09:54:44.113941 2043876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 09:54:44.123385 2043876 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:54:44.123449 2043876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:54:44.132120 2043876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:54:44.132137 2043876 kubeadm.go:157] found existing configuration files:
	
	I0804 09:54:44.132171 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:54:44.140100 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:54:44.140148 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:54:44.147629 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:54:44.155419 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:54:44.155462 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:54:44.163180 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:54:44.171122 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:54:44.171175 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:54:44.178750 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:54:44.186482 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:54:44.186532 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 09:54:44.193883 2043876 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:54:44.229398 2043876 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:54:44.229447 2043876 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:54:44.246890 2043876 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:54:44.246954 2043876 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:54:44.247021 2043876 kubeadm.go:310] OS: Linux
	I0804 09:54:44.247111 2043876 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:54:44.247206 2043876 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:54:44.247247 2043876 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:54:44.247312 2043876 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:54:44.247389 2043876 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:54:44.247463 2043876 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:54:44.247533 2043876 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:54:44.247607 2043876 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:54:44.247684 2043876 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:54:44.298966 2043876 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:54:44.299110 2043876 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:54:44.299245 2043876 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:54:46.987804 2043876 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:54:46.989115 2043876 out.go:235]   - Generating certificates and keys ...
	I0804 09:54:46.989216 2043876 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:54:46.989336 2043876 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:54:47.222602 2043876 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 09:54:47.546904 2043876 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 09:54:48.014983 2043876 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 09:54:48.099416 2043876 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 09:54:48.656670 2043876 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 09:54:48.656802 2043876 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-768931] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0804 09:54:48.738804 2043876 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 09:54:48.738953 2043876 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-768931] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0804 09:54:49.348406 2043876 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 09:54:49.842598 2043876 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 09:54:50.267923 2043876 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 09:54:50.268038 2043876 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:54:50.669410 2043876 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:54:50.826443 2043876 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:54:50.919419 2043876 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:54:51.052335 2043876 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:54:51.154193 2043876 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:54:51.154812 2043876 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:54:51.157094 2043876 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:54:51.158898 2043876 out.go:235]   - Booting up control plane ...
	I0804 09:54:51.159011 2043876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:54:51.159117 2043876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:54:51.161096 2043876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:54:51.170589 2043876 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:54:51.170739 2043876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:54:51.176312 2043876 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:54:51.176501 2043876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:54:51.176570 2043876 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:54:51.262593 2043876 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:54:51.262754 2043876 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:54:51.764351 2043876 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.749515ms
	I0804 09:54:51.767984 2043876 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:54:51.768100 2043876 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0804 09:54:51.768195 2043876 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:54:51.768287 2043876 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:54:54.430970 2043876 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.662701223s
	I0804 09:55:24.801310 2043876 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 33.033001382s
	I0804 09:58:51.768515 2043876 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000135415s
	I0804 09:58:51.768563 2043876 kubeadm.go:310] 
	I0804 09:58:51.768663 2043876 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 09:58:51.768777 2043876 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 09:58:51.768912 2043876 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 09:58:51.769048 2043876 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 09:58:51.769172 2043876 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 09:58:51.769319 2043876 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 09:58:51.769337 2043876 kubeadm.go:310] 
	I0804 09:58:51.772868 2043876 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 09:58:51.773133 2043876 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 09:58:51.773235 2043876 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 09:58:51.773567 2043876 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 09:58:51.773651 2043876 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0804 09:58:51.773809 2043876 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-768931] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-768931] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.749515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.662701223s
	[control-plane-check] kube-scheduler is healthy after 33.033001382s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000135415s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
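Note: kubeadm's hint above uses crictl, but with the docker driver the same containers can be inspected directly: cri-dockerd keeps the k8s_<container>_<pod>_<namespace> naming convention that the docker ps --filter name=k8s_ call earlier in this log also relies on. A hedged troubleshooting sketch (names and IDs illustrative):

docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Names}} {{.Status}}'
docker logs --tail 50 CONTAINERID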
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-768931] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-768931] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.749515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.662701223s
	[control-plane-check] kube-scheduler is healthy after 33.033001382s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000135415s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
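	The crictl hints printed above can be followed directly on the node. A minimal sketch, assuming shell access to the minikube container (for example via `minikube ssh -p newest-cni-768931`); CONTAINERID is a placeholder for the ID the first command reports for the failing kube-apiserver container:
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs --tail 400 CONTAINERID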
	
	I0804 09:58:51.773863 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0804 09:58:52.595043 2043876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:58:52.609675 2043876 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0804 09:58:52.609744 2043876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 09:58:52.621401 2043876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 09:58:52.621425 2043876 kubeadm.go:157] found existing configuration files:
	
	I0804 09:58:52.621468 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 09:58:52.630604 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 09:58:52.630670 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 09:58:52.638687 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 09:58:52.646868 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 09:58:52.646923 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 09:58:52.654721 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 09:58:52.663350 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 09:58:52.663400 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 09:58:52.673012 2043876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 09:58:52.683768 2043876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 09:58:52.683813 2043876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
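	The grep-then-remove sequence above is minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. A hedged shell equivalent of that loop (a sketch only; minikube actually issues each grep and rm as a separate command over SSH):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the file only if it already points at the expected control-plane endpoint
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	  done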
	I0804 09:58:52.692846 2043876 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0804 09:58:52.733954 2043876 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
	I0804 09:58:52.734221 2043876 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 09:58:52.748248 2043876 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0804 09:58:52.748358 2043876 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0804 09:58:52.748446 2043876 kubeadm.go:310] OS: Linux
	I0804 09:58:52.748498 2043876 kubeadm.go:310] CGROUPS_CPU: enabled
	I0804 09:58:52.748552 2043876 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0804 09:58:52.748635 2043876 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0804 09:58:52.748719 2043876 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0804 09:58:52.748766 2043876 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0804 09:58:52.748852 2043876 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0804 09:58:52.748927 2043876 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0804 09:58:52.748998 2043876 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0804 09:58:52.749059 2043876 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0804 09:58:52.823697 2043876 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 09:58:52.823847 2043876 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 09:58:52.823962 2043876 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 09:58:52.834767 2043876 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 09:58:52.836621 2043876 out.go:235]   - Generating certificates and keys ...
	I0804 09:58:52.836728 2043876 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 09:58:52.836818 2043876 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 09:58:52.836941 2043876 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 09:58:52.837060 2043876 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 09:58:52.837188 2043876 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 09:58:52.837323 2043876 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 09:58:52.837428 2043876 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 09:58:52.837546 2043876 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 09:58:52.837676 2043876 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 09:58:52.837767 2043876 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 09:58:52.837801 2043876 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 09:58:52.837849 2043876 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 09:58:52.973814 2043876 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 09:58:53.255656 2043876 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 09:58:53.573028 2043876 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 09:58:53.624287 2043876 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 09:58:54.025840 2043876 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 09:58:54.027225 2043876 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 09:58:54.030233 2043876 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 09:58:54.032101 2043876 out.go:235]   - Booting up control plane ...
	I0804 09:58:54.032216 2043876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 09:58:54.032324 2043876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 09:58:54.033000 2043876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 09:58:54.045991 2043876 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 09:58:54.046131 2043876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0804 09:58:54.051777 2043876 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0804 09:58:54.052047 2043876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 09:58:54.052120 2043876 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 09:58:54.140848 2043876 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 09:58:54.140998 2043876 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 09:58:55.142651 2043876 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001870324s
	I0804 09:58:55.148267 2043876 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0804 09:58:55.148386 2043876 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0804 09:58:55.148502 2043876 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0804 09:58:55.148608 2043876 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0804 09:58:57.461736 2043876 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.313255191s
	I0804 09:59:16.739852 2043876 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 21.591554618s
	I0804 10:02:55.149627 2043876 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.001045451s
	I0804 10:02:55.149679 2043876 kubeadm.go:310] 
	I0804 10:02:55.149785 2043876 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I0804 10:02:55.149895 2043876 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 10:02:55.150043 2043876 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0804 10:02:55.150189 2043876 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I0804 10:02:55.150279 2043876 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0804 10:02:55.150379 2043876 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I0804 10:02:55.150403 2043876 kubeadm.go:310] 
	I0804 10:02:55.153829 2043876 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0804 10:02:55.154028 2043876 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0804 10:02:55.154155 2043876 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 10:02:55.154422 2043876 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I0804 10:02:55.154549 2043876 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 10:02:55.154591 2043876 kubeadm.go:394] duration metric: took 8m11.068253126s to StartCluster
	I0804 10:02:55.154645 2043876 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 10:02:55.154695 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 10:02:55.201604 2043876 cri.go:89] found id: "f0834226628433468b794ead5eab31267922b72e692aaf805c04f8d2bf702c4e"
	I0804 10:02:55.201625 2043876 cri.go:89] found id: ""
	I0804 10:02:55.201642 2043876 logs.go:282] 1 containers: [f0834226628433468b794ead5eab31267922b72e692aaf805c04f8d2bf702c4e]
	I0804 10:02:55.201700 2043876 ssh_runner.go:195] Run: which crictl
	I0804 10:02:55.205604 2043876 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 10:02:55.205673 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 10:02:55.253118 2043876 cri.go:89] found id: "ddbe74f0f15a666573981bde101ab8a0d2c7ddeb2039fc9f203c1cc9e6967958"
	I0804 10:02:55.253139 2043876 cri.go:89] found id: ""
	I0804 10:02:55.253150 2043876 logs.go:282] 1 containers: [ddbe74f0f15a666573981bde101ab8a0d2c7ddeb2039fc9f203c1cc9e6967958]
	I0804 10:02:55.253203 2043876 ssh_runner.go:195] Run: which crictl
	I0804 10:02:55.257125 2043876 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 10:02:55.257205 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 10:02:55.293299 2043876 cri.go:89] found id: ""
	I0804 10:02:55.293326 2043876 logs.go:282] 0 containers: []
	W0804 10:02:55.293334 2043876 logs.go:284] No container was found matching "coredns"
	I0804 10:02:55.293341 2043876 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 10:02:55.293397 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 10:02:55.328130 2043876 cri.go:89] found id: "89bc4723825bb82755771486260a4d7ab9160cb861617e71e4024502c3027ac8"
	I0804 10:02:55.328153 2043876 cri.go:89] found id: ""
	I0804 10:02:55.328164 2043876 logs.go:282] 1 containers: [89bc4723825bb82755771486260a4d7ab9160cb861617e71e4024502c3027ac8]
	I0804 10:02:55.328211 2043876 ssh_runner.go:195] Run: which crictl
	I0804 10:02:55.332000 2043876 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 10:02:55.332071 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 10:02:55.368381 2043876 cri.go:89] found id: ""
	I0804 10:02:55.368409 2043876 logs.go:282] 0 containers: []
	W0804 10:02:55.368417 2043876 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:02:55.368423 2043876 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 10:02:55.368484 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 10:02:55.407001 2043876 cri.go:89] found id: "2c908315dc7a602a91864c4061ef164fde7133946814a9a7e1c8bbab517e923d"
	I0804 10:02:55.407028 2043876 cri.go:89] found id: ""
	I0804 10:02:55.407038 2043876 logs.go:282] 1 containers: [2c908315dc7a602a91864c4061ef164fde7133946814a9a7e1c8bbab517e923d]
	I0804 10:02:55.407089 2043876 ssh_runner.go:195] Run: which crictl
	I0804 10:02:55.410625 2043876 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 10:02:55.410688 2043876 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 10:02:55.448458 2043876 cri.go:89] found id: ""
	I0804 10:02:55.448488 2043876 logs.go:282] 0 containers: []
	W0804 10:02:55.448498 2043876 logs.go:284] No container was found matching "kindnet"
	I0804 10:02:55.448518 2043876 logs.go:123] Gathering logs for dmesg ...
	I0804 10:02:55.448535 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:02:55.475414 2043876 logs.go:123] Gathering logs for Docker ...
	I0804 10:02:55.475483 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:02:55.510524 2043876 logs.go:123] Gathering logs for container status ...
	I0804 10:02:55.510639 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:02:55.555794 2043876 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:02:55.555827 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:02:55.615173 2043876 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:02:55.607390    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.607896    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.609613    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.610157    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.611764    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:02:55.607390    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.607896    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.609613    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.610157    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:02:55.611764    6107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:02:55.615207 2043876 logs.go:123] Gathering logs for kube-apiserver [f0834226628433468b794ead5eab31267922b72e692aaf805c04f8d2bf702c4e] ...
	I0804 10:02:55.615224 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0834226628433468b794ead5eab31267922b72e692aaf805c04f8d2bf702c4e"
	I0804 10:02:55.655555 2043876 logs.go:123] Gathering logs for etcd [ddbe74f0f15a666573981bde101ab8a0d2c7ddeb2039fc9f203c1cc9e6967958] ...
	I0804 10:02:55.655588 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbe74f0f15a666573981bde101ab8a0d2c7ddeb2039fc9f203c1cc9e6967958"
	I0804 10:02:55.692914 2043876 logs.go:123] Gathering logs for kube-scheduler [89bc4723825bb82755771486260a4d7ab9160cb861617e71e4024502c3027ac8] ...
	I0804 10:02:55.692954 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89bc4723825bb82755771486260a4d7ab9160cb861617e71e4024502c3027ac8"
	I0804 10:02:55.758030 2043876 logs.go:123] Gathering logs for kube-controller-manager [2c908315dc7a602a91864c4061ef164fde7133946814a9a7e1c8bbab517e923d] ...
	I0804 10:02:55.758062 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c908315dc7a602a91864c4061ef164fde7133946814a9a7e1c8bbab517e923d"
	I0804 10:02:55.800868 2043876 logs.go:123] Gathering logs for kubelet ...
	I0804 10:02:55.800894 2043876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:02:55.895371 2043876 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870324s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.313255191s
	[control-plane-check] kube-scheduler is healthy after 21.591554618s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001045451s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W0804 10:02:55.895450 2043876 out.go:270] * 
	W0804 10:02:55.895527 2043876 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870324s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.313255191s
	[control-plane-check] kube-scheduler is healthy after 21.591554618s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001045451s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 10:02:55.895581 2043876 out.go:270] * 
	W0804 10:02:55.897648 2043876 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
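	For a run like this one, the suggested log bundle can be scoped to the affected profile; a sketch, assuming the profile name from this test:
	  minikube logs --file=logs.txt -p newest-cni-768931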
	I0804 10:02:55.972032 2043876 out.go:201] 
	W0804 10:02:56.013350 2043876 out.go:270] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1083-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001870324s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.313255191s
	[control-plane-check] kube-scheduler is healthy after 21.591554618s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001045451s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 10:02:56.013387 2043876 out.go:270] * 
	W0804 10:02:56.016241 2043876 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:02:56.096818 2043876 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0": exit status 80
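For reference, the failing start can be replayed verbatim (assuming a Linux/amd64 host with Docker and the minikube binary built at this commit):
  out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0-beta.0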
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-768931
helpers_test.go:235: (dbg) docker inspect newest-cni-768931:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	        "Created": "2025-08-04T09:54:35.028106074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2044476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:54:35.059003916Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hosts",
	        "LogPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd-json.log",
	        "Name": "/newest-cni-768931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-768931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-768931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	                "LowerDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-768931",
	                "Source": "/var/lib/docker/volumes/newest-cni-768931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-768931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-768931",
	                "name.minikube.sigs.k8s.io": "newest-cni-768931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b4b2f328c32b1b529e48931db8c3a52a04d51036ef15a3d11f31a213d0b35c4",
	            "SandboxKey": "/var/run/docker/netns/3b4b2f328c32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-768931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:f1:21:82:1c:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b469f2b8beae070883e49bfb67a442aa4bbac8703dfdd341c34c8d2ed3e42c07",
	                    "EndpointID": "5795aa26c98a72643fa4242fef06c4e0a9513f3ef2abc025fa9e5dc584931da2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-768931",
	                        "056ddd51825a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931: exit status 6 (296.872267ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0804 10:02:56.501490 2138551 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-768931" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (505.90s)
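Editor's note: the post-mortem above needs only a couple of fields from the docker inspect dump. State.Status and State.Pid are enough to show that the kic container itself is alive even though the apiserver inside it never came up. A minimal, self-contained Go sketch of that decode step (illustrative only, not the actual helpers_test.go code; the struct fields mirror the JSON printed above):

// inspect_state.go: decode container state from `docker inspect` output.
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

// Only the fields this check cares about; docker inspect emits far more.
type containerState struct {
    Status  string `json:"Status"`
    Running bool   `json:"Running"`
    Pid     int    `json:"Pid"`
}

type inspectEntry struct {
    Name  string         `json:"Name"`
    State containerState `json:"State"`
}

func main() {
    // docker inspect prints a JSON array with one entry per container.
    out, err := exec.Command("docker", "inspect", "newest-cni-768931").Output()
    if err != nil {
        log.Fatalf("docker inspect failed: %v", err)
    }
    var entries []inspectEntry
    if err := json.Unmarshal(out, &entries); err != nil {
        log.Fatalf("decoding inspect output: %v", err)
    }
    for _, e := range entries {
        // For the dump above this prints: /newest-cni-768931 running (pid 2044476)
        fmt.Printf("%s %s (pid %d)\n", e.Name, e.State.Status, e.State.Pid)
    }
}
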
TestStartStop/group/no-preload/serial/DeployApp (0.62s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-499486 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-499486 create -f testdata/busybox.yaml: exit status 1 (43.062189ms)
** stderr ** 
	error: context "no-preload-499486" does not exist
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-499486 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:
-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2029936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:53:15.69127721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4bc85e8d71c1f7b19543fb7f72dfa5ec983493b724ce990a1931d665bf24114",
	            "SandboxKey": "/var/run/docker/netns/d4bc85e8d71c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:7c:aa:5e:3e:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "5d5bec790cc22478e1fe74ad8dd7d943661e5a0fe9f47479f30e041ca21c6066",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 6 (268.865872ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0804 10:01:58.158273 2127452 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-499486" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-499486" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:
-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2029936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:53:15.69127721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4bc85e8d71c1f7b19543fb7f72dfa5ec983493b724ce990a1931d665bf24114",
	            "SandboxKey": "/var/run/docker/netns/d4bc85e8d71c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:7c:aa:5e:3e:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "5d5bec790cc22478e1fe74ad8dd7d943661e5a0fe9f47479f30e041ca21c6066",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
E0804 10:01:58.363721 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 6 (273.702179ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0804 10:01:58.450804 2127563 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-499486" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-499486" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.62s)
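Editor's note: here the container is fine; the kubeconfig is what is missing. Because `minikube start` exited with status 80 before writing a no-preload-499486 context, every later `kubectl --context` call fails with "context ... does not exist", and the status helper reports the matching kubeconfig endpoint error. A minimal sketch of that precondition check, assuming the k8s.io/client-go module is available (illustrative only, not how the harness checks it):

// context_check.go: verify a kubeconfig contains the profile's context.
package main

import (
    "fmt"
    "log"
    "os"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // KUBECONFIG can be a path list; a single path is assumed for brevity.
    path := os.Getenv("KUBECONFIG")
    if path == "" {
        path = clientcmd.RecommendedHomeFile // ~/.kube/config
    }
    cfg, err := clientcmd.LoadFromFile(path)
    if err != nil {
        log.Fatalf("loading kubeconfig: %v", err)
    }
    const profile = "no-preload-499486"
    if _, ok := cfg.Contexts[profile]; !ok {
        // The state this test hit: the start failed before the context was
        // written, so every kubectl call against it is doomed up front.
        fmt.Printf("context %q does not appear in %s\n", profile, path)
        return
    }
    fmt.Printf("context %q found; current context is %q\n", profile, cfg.CurrentContext)
}
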
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (95.84s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-499486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-499486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.503397144s)
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-499486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-499486 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-499486 describe deploy/metrics-server -n kube-system: exit status 1 (44.549488ms)
** stderr ** 
	error: context "no-preload-499486" does not exist
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-499486 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:
-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2029936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:53:15.69127721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4bc85e8d71c1f7b19543fb7f72dfa5ec983493b724ce990a1931d665bf24114",
	            "SandboxKey": "/var/run/docker/netns/d4bc85e8d71c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:7c:aa:5e:3e:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "5d5bec790cc22478e1fe74ad8dd7d943661e5a0fe9f47479f30e041ca21c6066",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 6 (274.78686ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0804 10:03:34.293895 2149135 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-499486" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-499486" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (95.84s)
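Editor's note: all four validation errors above collapse to one condition: nothing is listening on localhost:8443 inside the node, so kubectl cannot download the OpenAPI schema before applying the metrics-server manifests. A minimal Go probe of that condition (a hypothetical helper, not part of the test suite; /readyz is the apiserver's standard readiness endpoint):

// apiserver_probe.go: check whether the apiserver answers on localhost:8443.
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 5 * time.Second,
        // Skip cert verification for this probe; kubectl instead trusts the
        // CA recorded in the kubeconfig.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := client.Get("https://localhost:8443/readyz")
    if err != nil {
        // Matches the log above: dial tcp [::1]:8443: connect: connection refused.
        fmt.Println("apiserver unreachable:", err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("readyz: %s (%s)\n", resp.Status, body)
}

Turning validation off with --validate=false, as the error text suggests, would only postpone the failure: the apply itself still needs a reachable apiserver.
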
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (94.65s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-768931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-768931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.34791671s)
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-768931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
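All four validation errors in the stderr above share one root cause: nothing is answering on localhost:8443, so kubectl cannot fetch the OpenAPI schema to validate the addon manifests. A quick health probe before retrying, sketched with the profile name from this run:

	# is the control plane actually up?
	out/minikube-linux-amd64 -p newest-cni-768931 status
	# probe the apiserver readiness endpoint directly
	kubectl --context newest-cni-768931 get --raw='/readyz'
	# once it answers "ok", retry the addon
	out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-768931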
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-768931
helpers_test.go:235: (dbg) docker inspect newest-cni-768931:

-- stdout --
	[
	    {
	        "Id": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	        "Created": "2025-08-04T09:54:35.028106074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2044476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T09:54:35.059003916Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hosts",
	        "LogPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd-json.log",
	        "Name": "/newest-cni-768931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-768931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-768931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	                "LowerDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-768931",
	                "Source": "/var/lib/docker/volumes/newest-cni-768931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-768931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-768931",
	                "name.minikube.sigs.k8s.io": "newest-cni-768931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b4b2f328c32b1b529e48931db8c3a52a04d51036ef15a3d11f31a213d0b35c4",
	            "SandboxKey": "/var/run/docker/netns/3b4b2f328c32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-768931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:f1:21:82:1c:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b469f2b8beae070883e49bfb67a442aa4bbac8703dfdd341c34c8d2ed3e42c07",
	                    "EndpointID": "5795aa26c98a72643fa4242fef06c4e0a9513f3ef2abc025fa9e5dc584931da2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-768931",
	                        "056ddd51825a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
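Rather than scanning the full JSON dump above, single fields can be pulled with the same Go-template form the test harness itself uses, e.g.:

	# host port mapped to the apiserver's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-768931
	# one-word container state
	docker inspect -f '{{.State.Status}}' newest-cni-768931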
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931: exit status 6 (277.229557ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0804 10:04:31.157477 2162669 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-768931" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (94.65s)
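Per the advice box in the stderr above, triage for this failure starts with the profile's own logs; a sketch of the collection step it references:

	# dump minikube and cluster logs to a file for attachment to an issue
	out/minikube-linux-amd64 -p newest-cni-768931 logs --file=logs.txt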

TestStartStop/group/no-preload/serial/SecondStart (371.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
E0804 10:03:53.892173 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:53.898519 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:53.909894 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:53.931213 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:53.972727 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:54.054466 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:54.216449 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:54.538275 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:54.950134 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.179812 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.497617 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.503970 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.515326 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.536687 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.578047 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.659459 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:55.820958 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:56.142432 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:56.461966 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:56.784340 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:58.066545 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:03:59.023348 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
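The cert_rotation errors above appear to come from kubeconfig entries for profiles removed earlier in the run (auto-561540, calico-561540, custom-flannel-561540) whose client certificates no longer exist on disk; they are noise for this test. A sketch of clearing them, assuming the context and user names match the profile names:

	# drop leftover kubeconfig entries for removed profiles
	kubectl config delete-context calico-561540
	kubectl config delete-user calico-561540
	kubectl config delete-context custom-flannel-561540
	kubectl config delete-user custom-flannel-561540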
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 80 (6m9.735302702s)

-- stdout --
	* [no-preload-499486] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-499486" primary control-plane node in "no-preload-499486" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Restarting existing docker container for "no-preload-499486" ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I0804 10:03:35.704441 2149628 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:03:35.704889 2149628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:03:35.704907 2149628 out.go:358] Setting ErrFile to fd 2...
	I0804 10:03:35.704915 2149628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:03:35.705396 2149628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:03:35.706181 2149628 out.go:352] Setting JSON to false
	I0804 10:03:35.707575 2149628 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153905,"bootTime":1754147911,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:03:35.707681 2149628 start.go:140] virtualization: kvm guest
	I0804 10:03:35.709197 2149628 out.go:177] * [no-preload-499486] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:03:35.710696 2149628 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:03:35.710827 2149628 notify.go:220] Checking for updates...
	I0804 10:03:35.712792 2149628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:03:35.713837 2149628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:03:35.714796 2149628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:03:35.715815 2149628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:03:35.716888 2149628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:03:35.718272 2149628 config.go:182] Loaded profile config "no-preload-499486": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:03:35.718747 2149628 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:03:35.741393 2149628 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:03:35.741493 2149628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:03:35.792541 2149628 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 10:03:35.782544854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:03:35.792648 2149628 docker.go:318] overlay module found
	I0804 10:03:35.794196 2149628 out.go:177] * Using the docker driver based on existing profile
	I0804 10:03:35.795304 2149628 start.go:304] selected driver: docker
	I0804 10:03:35.795326 2149628 start.go:918] validating driver "docker" against &{Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:03:35.795428 2149628 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:03:35.796253 2149628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:03:35.844858 2149628 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 10:03:35.836006894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:03:35.845295 2149628 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 10:03:35.845341 2149628 cni.go:84] Creating CNI manager for ""
	I0804 10:03:35.845423 2149628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:03:35.845489 2149628 start.go:348] cluster config:
	{Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:03:35.846981 2149628 out.go:177] * Starting "no-preload-499486" primary control-plane node in "no-preload-499486" cluster
	I0804 10:03:35.847912 2149628 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:03:35.848827 2149628 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:03:35.849684 2149628 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:03:35.849786 2149628 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:03:35.849846 2149628 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/config.json ...
	I0804 10:03:35.850003 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:35.872060 2149628 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:03:35.872091 2149628 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:03:35.872111 2149628 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:03:35.872149 2149628 start.go:360] acquireMachinesLock for no-preload-499486: {Name:mk37c51365b17ced600d568c1425a7f58dbdcfcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:35.872208 2149628 start.go:364] duration metric: took 38.621µs to acquireMachinesLock for "no-preload-499486"
	I0804 10:03:35.872226 2149628 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:03:35.872232 2149628 fix.go:54] fixHost starting: 
	I0804 10:03:35.872532 2149628 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 10:03:35.892105 2149628 fix.go:112] recreateIfNeeded on no-preload-499486: state=Stopped err=<nil>
	W0804 10:03:35.892147 2149628 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:03:35.893836 2149628 out.go:177] * Restarting existing docker container for "no-preload-499486" ...
	I0804 10:03:35.894882 2149628 cli_runner.go:164] Run: docker start no-preload-499486
	I0804 10:03:36.126892 2149628 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 10:03:36.147249 2149628 kic.go:430] container "no-preload-499486" state is running.
	I0804 10:03:36.147755 2149628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-499486
	I0804 10:03:36.167138 2149628 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/config.json ...
	I0804 10:03:36.167351 2149628 machine.go:93] provisionDockerMachine start ...
	I0804 10:03:36.167414 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:36.185587 2149628 main.go:141] libmachine: Using SSH client type: native
	I0804 10:03:36.185929 2149628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0804 10:03:36.185950 2149628 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:03:36.186657 2149628 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60948->127.0.0.1:33164: read: connection reset by peer
	I0804 10:03:36.236382 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:36.624180 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:37.033825 2149628 cache.go:107] acquiring lock: {Name:mkf6bf097f9b4ab85114a6fa38ad13bfc2488603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.033818 2149628 cache.go:107] acquiring lock: {Name:mkcb7c5aa46ee6392f69a29d6d1585a5e7488cd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.033882 2149628 cache.go:107] acquiring lock: {Name:mk33ce9e689d2e467401f7efa84455ad3f2e92ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.033863 2149628 cache.go:107] acquiring lock: {Name:mk9f4291ac7cb8894a58bf7b28674291cc899ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.033948 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0 exists
	I0804 10:03:37.033818 2149628 cache.go:107] acquiring lock: {Name:mka423fb18126d40f4a4f7fca8ec6e3e41082638 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.033844 2149628 cache.go:107] acquiring lock: {Name:mkfd5a21bbd2e3fa848283c303b92221b810b9b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.033864 2149628 cache.go:107] acquiring lock: {Name:mkfef881a264b8a3a60f6a6f0c24e47a08186ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.034001 2149628 cache.go:107] acquiring lock: {Name:mk7ddaf4fc877a751da8cfe2ede1952cd2ef0b12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:03:37.034077 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0804 10:03:37.034081 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0 exists
	I0804 10:03:37.034083 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 exists
	I0804 10:03:37.034086 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0804 10:03:37.034093 2149628 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 230.298µs
	I0804 10:03:37.034098 2149628 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0" took 237.022µs
	I0804 10:03:37.034100 2149628 cache.go:96] cache image "registry.k8s.io/etcd:3.5.21-0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0" took 262.949µs
	I0804 10:03:37.034107 2149628 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0804 10:03:37.034110 2149628 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0-beta.0 succeeded
	I0804 10:03:37.034111 2149628 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.21-0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 succeeded
	I0804 10:03:37.034110 2149628 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 244.243µs
	I0804 10:03:37.033970 2149628 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0" took 150.498µs
	I0804 10:03:37.034122 2149628 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0804 10:03:37.034131 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0804 10:03:37.034146 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0 exists
	I0804 10:03:37.034155 2149628 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 363.128µs
	I0804 10:03:37.034163 2149628 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0804 10:03:37.034162 2149628 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0" took 364.353µs
	I0804 10:03:37.034176 2149628 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0-beta.0 succeeded
	I0804 10:03:37.034134 2149628 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0-beta.0 succeeded
	I0804 10:03:37.034168 2149628 cache.go:115] /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0 exists
	I0804 10:03:37.034199 2149628 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0-beta.0" -> "/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0" took 233.102µs
	I0804 10:03:37.034220 2149628 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0-beta.0 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0-beta.0 succeeded
	I0804 10:03:37.034237 2149628 cache.go:87] Successfully saved all images to host disk.
	I0804 10:03:39.313075 2149628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-499486
	
	I0804 10:03:39.313113 2149628 ubuntu.go:169] provisioning hostname "no-preload-499486"
	I0804 10:03:39.313186 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:39.331575 2149628 main.go:141] libmachine: Using SSH client type: native
	I0804 10:03:39.331782 2149628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0804 10:03:39.331795 2149628 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-499486 && echo "no-preload-499486" | sudo tee /etc/hostname
	I0804 10:03:39.468449 2149628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-499486
	
	I0804 10:03:39.468518 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:39.486390 2149628 main.go:141] libmachine: Using SSH client type: native
	I0804 10:03:39.486664 2149628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0804 10:03:39.486692 2149628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-499486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-499486/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-499486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:03:39.610149 2149628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:03:39.610184 2149628 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:03:39.610203 2149628 ubuntu.go:177] setting up certificates
	I0804 10:03:39.610230 2149628 provision.go:84] configureAuth start
	I0804 10:03:39.610328 2149628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-499486
	I0804 10:03:39.629322 2149628 provision.go:143] copyHostCerts
	I0804 10:03:39.629388 2149628 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:03:39.629399 2149628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:03:39.629473 2149628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:03:39.629572 2149628 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:03:39.629582 2149628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:03:39.629606 2149628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:03:39.629670 2149628 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:03:39.629678 2149628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:03:39.629698 2149628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:03:39.629758 2149628 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.no-preload-499486 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-499486]
	I0804 10:03:39.672180 2149628 provision.go:177] copyRemoteCerts
	I0804 10:03:39.672236 2149628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:03:39.672275 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:39.690385 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:39.786047 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:03:39.808577 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:03:39.830812 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 10:03:39.852801 2149628 provision.go:87] duration metric: took 242.55153ms to configureAuth
	I0804 10:03:39.852836 2149628 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:03:39.853047 2149628 config.go:182] Loaded profile config "no-preload-499486": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:03:39.853095 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:39.870828 2149628 main.go:141] libmachine: Using SSH client type: native
	I0804 10:03:39.871068 2149628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0804 10:03:39.871083 2149628 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:03:39.997809 2149628 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:03:39.997839 2149628 ubuntu.go:71] root file system type: overlay
	I0804 10:03:39.997969 2149628 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:03:39.998060 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:40.016340 2149628 main.go:141] libmachine: Using SSH client type: native
	I0804 10:03:40.016581 2149628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0804 10:03:40.016679 2149628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:03:40.152712 2149628 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:03:40.152827 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:40.173209 2149628 main.go:141] libmachine: Using SSH client type: native
	I0804 10:03:40.173476 2149628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0804 10:03:40.173494 2149628 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:03:40.302725 2149628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
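Two idioms in the step above are worth noting: the empty ExecStart= line in the rendered unit clears any start command inherited from the base docker.service before the real one is set (as the unit's own comments explain), and the diff guard only swaps the new file in and restarts Docker when the rendered unit actually differs from what is on disk. A minimal stand-alone sketch of the same idempotent-update pattern (paths as in the log):
	# only replace the unit and restart when the rendered file differs from the live one
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	fi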
	I0804 10:03:40.302761 2149628 machine.go:96] duration metric: took 4.13539512s to provisionDockerMachine
	I0804 10:03:40.302774 2149628 start.go:293] postStartSetup for "no-preload-499486" (driver="docker")
	I0804 10:03:40.302787 2149628 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:03:40.302859 2149628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:03:40.302918 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:40.320673 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:40.410193 2149628 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:03:40.413229 2149628 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:03:40.413316 2149628 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:03:40.413328 2149628 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:03:40.413337 2149628 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:03:40.413347 2149628 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:03:40.413401 2149628 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:03:40.413485 2149628 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:03:40.413577 2149628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:03:40.421730 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:03:40.443272 2149628 start.go:296] duration metric: took 140.483871ms for postStartSetup
	I0804 10:03:40.443344 2149628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:03:40.443377 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:40.460574 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:40.550032 2149628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:03:40.554354 2149628 fix.go:56] duration metric: took 4.682115714s for fixHost
	I0804 10:03:40.554386 2149628 start.go:83] releasing machines lock for "no-preload-499486", held for 4.682169477s
	I0804 10:03:40.554465 2149628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-499486
	I0804 10:03:40.572150 2149628 ssh_runner.go:195] Run: cat /version.json
	I0804 10:03:40.572193 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:40.572237 2149628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:03:40.572323 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:40.590043 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:40.590043 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:40.760126 2149628 ssh_runner.go:195] Run: systemctl --version
	I0804 10:03:40.765183 2149628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:03:40.769598 2149628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:03:40.787369 2149628 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 10:03:40.787440 2149628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:03:40.795746 2149628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
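The find/sed pass above injects a "name" field into any loopback CNI config that lacks one and pins its cniVersion, while the second find renames any bridge/podman configs out of the way. After patching, a loopback config should look roughly like this (file name illustrative; exact field order may differ):
	$ cat /etc/cni/net.d/200-loopback.conf
	{ "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }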
	I0804 10:03:40.795772 2149628 start.go:495] detecting cgroup driver to use...
	I0804 10:03:40.795809 2149628 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:03:40.795920 2149628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:03:40.810616 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:41.218154 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:03:41.228878 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:03:41.237893 2149628 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:03:41.237939 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:03:41.246796 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:03:41.255374 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:03:41.263906 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:03:41.272774 2149628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:03:41.281317 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:03:41.290264 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:03:41.299053 2149628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:03:41.308042 2149628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:03:41.315447 2149628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:03:41.323026 2149628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:03:41.401274 2149628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 10:03:41.494057 2149628 start.go:495] detecting cgroup driver to use...
	I0804 10:03:41.494111 2149628 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:03:41.494153 2149628 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:03:41.506513 2149628 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:03:41.506593 2149628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:03:41.519007 2149628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:03:41.537415 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:41.958652 2149628 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:03:41.962747 2149628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:03:41.971006 2149628 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:03:41.988241 2149628 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:03:42.068015 2149628 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:03:42.146442 2149628 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:03:42.146588 2149628 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
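The 130-byte daemon.json written here is what actually pins Docker's cgroup driver; the log does not show its contents, but the standard mechanism is the exec-opts key, so the file is likely at minimum something like:
	$ cat /etc/docker/daemon.json
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }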
	I0804 10:03:42.164489 2149628 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:03:42.174964 2149628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:03:42.250096 2149628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:03:42.573440 2149628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:03:42.584387 2149628 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:03:42.595808 2149628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:03:42.606369 2149628 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:03:42.681607 2149628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:03:42.754770 2149628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:03:42.833546 2149628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:03:42.846780 2149628 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:03:42.856938 2149628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:03:42.933626 2149628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:03:42.997348 2149628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:03:43.008700 2149628 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:03:43.008763 2149628 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:03:43.012020 2149628 start.go:563] Will wait 60s for crictl version
	I0804 10:03:43.012067 2149628 ssh_runner.go:195] Run: which crictl
	I0804 10:03:43.015165 2149628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:03:43.049052 2149628 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 10:03:43.049140 2149628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:03:43.073876 2149628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:03:43.099571 2149628 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:03:43.099681 2149628 cli_runner.go:164] Run: docker network inspect no-preload-499486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:03:43.117183 2149628 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0804 10:03:43.120738 2149628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:03:43.131265 2149628 kubeadm.go:875] updating cluster {Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:03:43.131449 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:43.538104 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:43.945454 2149628 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:03:44.328916 2149628 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:03:44.328984 2149628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:03:44.349967 2149628 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:03:44.349995 2149628 cache_images.go:85] Images are preloaded, skipping loading
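The preload check compares the `docker images` listing above against the image set kubeadm needs for this Kubernetes version. A hand-run equivalent for the expected side of that comparison, using the binaries already staged on the node:
	# list the control-plane images this kubeadm release will pull
	sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config images list --kubernetes-version v1.34.0-beta.0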
	I0804 10:03:44.350007 2149628 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:03:44.350152 2149628 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-499486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:03:44.350224 2149628 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:03:44.400058 2149628 cni.go:84] Creating CNI manager for ""
	I0804 10:03:44.400088 2149628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:03:44.400101 2149628 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 10:03:44.400140 2149628 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-499486 NodeName:no-preload-499486 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:03:44.400320 2149628 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-499486"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 10:03:44.400393 2149628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:03:44.409470 2149628 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:03:44.409533 2149628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:03:44.417832 2149628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:03:44.434815 2149628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:03:44.451932 2149628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
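The 2302-byte stream just copied is the four-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Once on the node it can be sanity-checked against the kubeadm API types before use; a sketch, assuming kubeadm v1.26+ where the validate subcommand exists:
	sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new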
	I0804 10:03:44.468307 2149628 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:03:44.471527 2149628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:03:44.482924 2149628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:03:44.565642 2149628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:03:44.579231 2149628 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486 for IP: 192.168.94.2
	I0804 10:03:44.579254 2149628 certs.go:194] generating shared ca certs ...
	I0804 10:03:44.579300 2149628 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:03:44.579439 2149628 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:03:44.579486 2149628 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:03:44.579495 2149628 certs.go:256] generating profile certs ...
	I0804 10:03:44.579581 2149628 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/client.key
	I0804 10:03:44.579623 2149628 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key.e2e5da35
	I0804 10:03:44.579657 2149628 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.key
	I0804 10:03:44.579756 2149628 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:03:44.579785 2149628 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:03:44.579795 2149628 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:03:44.579816 2149628 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:03:44.579838 2149628 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:03:44.579859 2149628 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:03:44.579897 2149628 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:03:44.580510 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:03:44.603825 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:03:44.630560 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:03:44.673465 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:03:44.761797 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:03:44.790552 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:03:44.812685 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:03:44.836183 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/no-preload-499486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:03:44.858393 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:03:44.880538 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:03:44.903084 2149628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:03:44.925077 2149628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:03:44.941014 2149628 ssh_runner.go:195] Run: openssl version
	I0804 10:03:44.946074 2149628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:03:44.954491 2149628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:03:44.957665 2149628 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:03:44.957713 2149628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:03:44.964073 2149628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:03:44.971992 2149628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:03:44.981433 2149628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:03:44.984688 2149628 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:03:44.984735 2149628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:03:44.992066 2149628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 10:03:45.000883 2149628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:03:45.009692 2149628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:03:45.012783 2149628 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:03:45.012837 2149628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:03:45.019576 2149628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:03:45.027950 2149628 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:03:45.031291 2149628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:03:45.037300 2149628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:03:45.043418 2149628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:03:45.049509 2149628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:03:45.055569 2149628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:03:45.061745 2149628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
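The `-checkend 86400` probes ask whether each certificate will still be valid 24 hours from now: openssl exits 0 if the cert does not expire within the given number of seconds, non-zero otherwise. For example:
	# exit 0: valid for at least another day; non-zero: expiring or expired
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "ok for 24h" || echo "renewal needed"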
	I0804 10:03:45.068460 2149628 kubeadm.go:392] StartCluster: {Name:no-preload-499486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:no-preload-499486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:03:45.068587 2149628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:03:45.086943 2149628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:03:45.097491 2149628 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:03:45.097509 2149628 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:03:45.097542 2149628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:03:45.106900 2149628 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:03:45.107876 2149628 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-499486" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:03:45.108248 2149628 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-499486" cluster setting kubeconfig missing "no-preload-499486" context setting]
	I0804 10:03:45.108809 2149628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:03:45.110367 2149628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:03:45.120664 2149628 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0804 10:03:45.120700 2149628 kubeadm.go:593] duration metric: took 23.186499ms to restartPrimaryControlPlane
	I0804 10:03:45.120710 2149628 kubeadm.go:394] duration metric: took 52.258323ms to StartCluster
	I0804 10:03:45.120731 2149628 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:03:45.120804 2149628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:03:45.121941 2149628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:03:45.122548 2149628 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:03:45.122676 2149628 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:03:45.122808 2149628 addons.go:69] Setting storage-provisioner=true in profile "no-preload-499486"
	I0804 10:03:45.122834 2149628 addons.go:238] Setting addon storage-provisioner=true in "no-preload-499486"
	I0804 10:03:45.122846 2149628 config.go:182] Loaded profile config "no-preload-499486": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:03:45.122859 2149628 addons.go:69] Setting dashboard=true in profile "no-preload-499486"
	I0804 10:03:45.122898 2149628 host.go:66] Checking if "no-preload-499486" exists ...
	I0804 10:03:45.122917 2149628 addons.go:69] Setting default-storageclass=true in profile "no-preload-499486"
	I0804 10:03:45.122930 2149628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-499486"
	I0804 10:03:45.122905 2149628 addons.go:238] Setting addon dashboard=true in "no-preload-499486"
	W0804 10:03:45.123060 2149628 addons.go:247] addon dashboard should already be in state true
	I0804 10:03:45.123095 2149628 host.go:66] Checking if "no-preload-499486" exists ...
	I0804 10:03:45.123268 2149628 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 10:03:45.123482 2149628 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 10:03:45.123593 2149628 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 10:03:45.125150 2149628 out.go:177] * Verifying Kubernetes components...
	I0804 10:03:45.126560 2149628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:03:45.147826 2149628 addons.go:238] Setting addon default-storageclass=true in "no-preload-499486"
	I0804 10:03:45.147884 2149628 host.go:66] Checking if "no-preload-499486" exists ...
	I0804 10:03:45.148360 2149628 cli_runner.go:164] Run: docker container inspect no-preload-499486 --format={{.State.Status}}
	I0804 10:03:45.151756 2149628 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:03:45.152990 2149628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:03:45.153049 2149628 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0804 10:03:45.154364 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:03:45.154386 2149628 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:03:45.154424 2149628 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:03:45.154444 2149628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:03:45.154445 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:45.154499 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:45.182975 2149628 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:03:45.183011 2149628 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:03:45.183074 2149628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-499486
	I0804 10:03:45.188013 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:45.194395 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:45.209893 2149628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/no-preload-499486/id_rsa Username:docker}
	I0804 10:03:45.368138 2149628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:03:45.382516 2149628 node_ready.go:35] waiting up to 6m0s for node "no-preload-499486" to be "Ready" ...
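The "waiting up to 6m0s" poll repeatedly GETs the node object and inspects its Ready condition; the hand-run kubectl equivalent against the same cluster would be:
	kubectl --context no-preload-499486 wait --for=condition=Ready node/no-preload-499486 --timeout=6m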
	I0804 10:03:45.486429 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:03:45.487086 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:03:45.591072 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:03:45.591161 2149628 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:03:45.683655 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:03:45.683686 2149628 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:03:45.779195 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:03:45.779238 2149628 retry.go:31] will retry after 198.798267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:03:45.779319 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:03:45.779330 2149628 retry.go:31] will retry after 242.216132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:03:45.783486 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:03:45.783513 2149628 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0804 10:03:45.875487 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:03:45.875513 2149628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:03:45.898888 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:03:45.898923 2149628 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:03:45.978422 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:03:46.022288 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:03:46.064143 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:03:46.064175 2149628 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:03:46.170864 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:03:46.170894 2149628 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:03:46.275947 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:03:46.275974 2149628 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0804 10:03:46.296008 2149628 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:03:46.296031 2149628 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:03:46.380797 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:03:56.383614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:06.287364 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.308892663s)
	W0804 10:04:06.287437 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:06.287459 2149628 retry.go:31] will retry after 553.97886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:06.287460 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.265133125s)
	W0804 10:04:06.287494 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:06.287508 2149628 retry.go:31] will retry after 553.01831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:06.384923 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:06.507617 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.126769599s)
	W0804 10:04:06.507690 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:06.507714 2149628 retry.go:31] will retry after 312.703741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:06.821315 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:06.841669 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:06.841676 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:08.498660 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.1:35224->192.168.94.2:8443: read: connection reset by peer
	I0804 10:04:08.510099 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.68870718s)
	W0804 10:04:08.510151 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:08.510176 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.668458086s)
	W0804 10:04:08.510217 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:08.510229 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.668450472s)
	I0804 10:04:08.510240 2149628 retry.go:31] will retry after 631.685144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:08.510188 2149628 retry.go:31] will retry after 481.325601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:08.510258 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:08.510275 2149628 retry.go:31] will retry after 483.752988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:08.992183 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:08.994450 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:09.050792 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:09.050821 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.050838 2149628 retry.go:31] will retry after 1.158997498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.050836 2149628 retry.go:31] will retry after 654.989198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.142958 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:09.208100 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.208143 2149628 retry.go:31] will retry after 636.703126ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.706137 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:09.764169 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.764213 2149628 retry.go:31] will retry after 448.109614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.845157 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:09.905777 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:09.905811 2149628 retry.go:31] will retry after 880.776119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:10.210129 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:10.213438 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:10.267020 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:10.267052 2149628 retry.go:31] will retry after 1.647090663s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:10.267871 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:10.267896 2149628 retry.go:31] will retry after 1.656748604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:10.786913 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:10.838658 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:10.838691 2149628 retry.go:31] will retry after 2.235593198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
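The "will retry after" lines show minikube's retry.go backing off with growing, jittered delays (312ms, 481ms, 631ms, ... up to 3.26s further down). A hedged shell sketch of that pattern applied to the storage-provisioner manifest; the multiplier and attempt count here are assumptions for illustration, not minikube's actual schedule:

	# Retry the apply with a crude exponential backoff (illustrative only).
	delay=0.3
	for attempt in 1 2 3 4 5 6 7 8; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 1.6 }')
	done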
	W0804 10:04:10.883110 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:11.915010 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:11.925431 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:11.976297 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:11.976330 2149628 retry.go:31] will retry after 1.691299565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:11.987393 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:11.987427 2149628 retry.go:31] will retry after 2.025007357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:13.075327 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:13.128442 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:13.128472 2149628 retry.go:31] will retry after 3.26438914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
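Note: every apply in this stretch fails before a single manifest reaches the cluster. kubectl validates client-side by first downloading the cluster's OpenAPI schema, and with nothing listening on localhost:8443 that download dies with "connection refused". The --validate=false flag kubectl suggests would only skip the schema check; the apply itself would still need a reachable apiserver. A minimal sketch of the schema request that is failing (standalone Go, not kubectl or minikube code; the insecure TLS config is an assumption for probing only):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Same endpoint and timeout that appear in the errors above.
        client := &http.Client{
            Timeout:   32 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
        if err != nil {
            // With the apiserver down, this prints the same "connection refused"
            // failure at which every kubectl apply above stops.
            fmt.Println("schema download failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("OpenAPI schema available:", resp.Status)
    }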
	W0804 10:04:13.383110 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:13.668508 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:13.720756 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:13.720786 2149628 retry.go:31] will retry after 1.821392634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:04:14.012620 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:14.067297 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:14.067331 2149628 retry.go:31] will retry after 4.191888651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
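Note: the backoff intervals retry.go logs here (3.26s, 1.82s, 4.19s, 2.89s, ...) vary per attempt, which suggests randomized jitter on top of a growing base. A minimal sketch of that retry pattern, under the assumption of jittered exponential backoff (an illustration, not minikube's actual retry.go; the base interval is shortened so the sketch runs quickly):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // interval between failures, and returns the last error.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // base * 2^i, scaled by a random factor in [0.5, 1.5).
            wait := time.Duration(float64(base) * float64(uint(1)<<i) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        _ = retry(4, 500*time.Millisecond, func() error {
            return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
        })
    }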
	W0804 10:04:15.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:15.543369 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:15.603129 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:15.603174 2149628 retry.go:31] will retry after 3.263291622s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:04:16.393420 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:16.450924 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:16.450956 2149628 retry.go:31] will retry after 4.577343454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W0804 10:04:17.883977 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
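Note: while the addon applies spin, node_ready.go is independently polling GET https://192.168.94.2:8443/api/v1/nodes/no-preload-499486 and waiting for the node's "Ready" condition, hence the recurring warning every two to three seconds. A sketch of that style of poll (the endpoint and node name are taken from the log; the loop, the anonymous access, and the insecure TLS config are illustrative assumptions, and a real cluster requires credentials):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // node mirrors only the fields of the Kubernetes Node object we inspect.
    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        client := &http.Client{
            Timeout:   10 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486"
        for {
            resp, err := client.Get(url)
            if err != nil {
                // Matches the recurring node_ready.go warnings in this log.
                fmt.Println("error getting node (will retry):", err)
                time.Sleep(2 * time.Second)
                continue
            }
            var n node
            err = json.NewDecoder(resp.Body).Decode(&n)
            resp.Body.Close()
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == "Ready" && c.Status == "True" {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
    }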
	I0804 10:04:18.259345 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:18.319962 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:18.320000 2149628 retry.go:31] will retry after 2.893549664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:04:18.867078 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:18.924044 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:18.924081 2149628 retry.go:31] will retry after 5.86617092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W0804 10:04:20.383389 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:21.029394 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:21.099334 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:21.099373 2149628 retry.go:31] will retry after 6.141051832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:04:21.214604 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:21.280479 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:21.280523 2149628 retry.go:31] will retry after 8.864592748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W0804 10:04:22.883221 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:24.792908 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:27.240590 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:30.145453 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:04:43.886614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
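Note: the failure mode shifts here. "connection refused" meant nothing was accepting TCP connections on 8443 at all; "net/http: TLS handshake timeout" means a listener now exists but never completes the handshake, which usually points at an apiserver that is starting up or crash-looping rather than absent. A standalone probe that tells the two states apart (a sketch, not test code; the connection is discarded immediately, so skipping certificate verification is harmless here):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "192.168.94.2:8443"
        d := net.Dialer{Timeout: 5 * time.Second}
        conn, err := tls.DialWithDialer(&d, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            // "connect: connection refused"  -> no listener yet
            // "i/o timeout" during handshake -> listener up, but not serving TLS
            fmt.Println("not ready:", err)
            return
        }
        conn.Close()
        fmt.Println("TLS handshake completed; apiserver is at least serving")
    }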
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
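Note: the three applies launched between 10:04:24 and 10:04:30 blocked for 15-21 seconds (the ssh_runner "Completed" lines above) and now fail with "connection refused - error from a previous attempt: read: connection reset by peer". That composite message comes from Go's HTTP client: a request in flight on a kept-alive connection was reset by the peer, the client retried on a fresh connection and was refused, and both errors are reported together. Telling these shapes apart programmatically looks roughly like this (a sketch assuming a Linux-style syscall error chain, not minikube code):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
    )

    // classify maps a dial/read error onto the three states seen in this log.
    func classify(err error) string {
        var opErr *net.OpError
        switch {
        case err == nil:
            return "connected"
        case errors.Is(err, syscall.ECONNREFUSED):
            return "refused: nothing listening on the port"
        case errors.Is(err, syscall.ECONNRESET):
            return "reset: the peer accepted, then dropped the connection"
        case errors.As(err, &opErr) && opErr.Timeout():
            return "timeout: a listener exists but is not answering"
        }
        return "other: " + err.Error()
    }

    func main() {
        conn, err := net.Dial("tcp", "localhost:8443")
        if err == nil {
            conn.Close()
        }
        fmt.Println(classify(err))
    }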
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:04.883286 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:49.883713 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:47.883235 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:50.383116 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:52.383162 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:54.383410 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:56.383810 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:58.883290 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:00.883650 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:03.383190 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:05.383617 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:07.384051 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:09.883346 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:11.883783 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:13.884208 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:16.383435 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:18.383891 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:20.883429 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:22.884027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:25.383556 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:27.883164 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:29.883548 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:31.883955 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:34.383514 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:36.883247 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:38.883512 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:40.884109 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:43.383400 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:09:45.383376 2149628 node_ready.go:38] duration metric: took 6m0.000813638s for node "no-preload-499486" to be "Ready" ...
	I0804 10:09:45.385759 2149628 out.go:201] 
	W0804 10:09:45.386973 2149628 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 10:09:45.386995 2149628 out.go:270] * 
	* 
	W0804 10:09:45.389624 2149628 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:09:45.390891 2149628 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-499486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0": exit status 80
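Every poll in the wait loop above hits the node object at https://192.168.94.2:8443 and is refused, which indicates nothing is accepting connections on the apiserver port inside the guest. The same probe can be reproduced by hand against the profile (a minimal sketch, assuming the kubeconfig context written for no-preload-499486 during the run is still available):

    # Query the node's Ready condition directly, as the wait loop does
    # (hypothetical manual triage; the context name is taken from the profile name).
    kubectl --context no-preload-499486 get node no-preload-499486 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A "connection refused" from this command, as in the log, points at the apiserver itself rather than at the test harness.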
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:

-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2149831,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T10:03:35.921334492Z",
	            "FinishedAt": "2025-08-04T10:03:34.718097407Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fac6055cf947ab02c491cbb5dd64cbf3c0ae98a2e42975ad1d99b1bdbe7a9bbd",
	            "SandboxKey": "/var/run/docker/netns/fac6055cf947",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:00:36:b7:69:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "cd2f2866ae03228d2f1c745367746ee5866c33aa7baf64438d9f50fae785c9c7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
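The inspect output confirms the container is running and that 8443/tcp is published on 127.0.0.1:33167, so the refused connections occur inside the guest rather than at the port mapping. A single field can be pulled from the inspect data with the same Go-template selector the harness itself uses elsewhere for 22/tcp (a sketch; the template path mirrors the -f invocations visible in the logs below):

    # Print only the host port mapped to the apiserver port (8443/tcp).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      no-preload-499486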
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 2 (274.868223ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
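The Host field alone prints Running, so the non-zero exit most likely reflects another component (kubelet or apiserver) being unhealthy. Fuller, machine-readable detail is available from the same subcommand (a sketch, assuming this minikube build supports the upstream --output flag):

    # Report all component states as JSON instead of only the Host field.
    out/minikube-linux-amd64 status -p no-preload-499486 --output=json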
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-499486 logs -n 25
E0804 10:09:46.118082 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-561540 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo docker system info                                                                                                                                                                                                              │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                        │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                  │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cri-dockerd --version                                                                                                                                                                                                           │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat containerd --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo containerd config dump                                                                                                                                                                                                          │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat crio --no-pager                                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                         │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo crio config                                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p kubenet-561540                                                                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ stop    │ -p newest-cni-768931 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ image   │ newest-cni-768931 image list --format=json                                                                                                                                                                                                             │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ pause   │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ unpause │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ delete  │ -p newest-cni-768931                                                                                                                                                                                                                                   │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:09 UTC │ 04 Aug 25 10:09 UTC │
	│ delete  │ -p newest-cni-768931                                                                                                                                                                                                                                   │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:09 UTC │ 04 Aug 25 10:09 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 10:04:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
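The find/sed pass above normalizes whatever loopback config the base image ships: it injects a "name" field when one is missing and pins cniVersion to 1.0.0, since CNI 1.0 requires every network config to carry a name. After patching, the file should look roughly like this (file name and field order are illustrative; only the "name" insertion and the cniVersion pin are guaranteed by the command):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}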
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
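Taken together, the sed edits above leave the CRI section of /etc/containerd/config.toml in roughly this shape before the restart (a reconstruction assuming containerd's 1.x config layout; only the keys the seds touch are shown):

	# reconstructed excerpt, not logged verbatim
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false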
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
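The 130-byte daemon.json payload itself is not echoed into the log. A daemon.json that pins Docker's cgroup driver the way the previous line describes would look something like the following (contents assumed, inferred only from the "configuring docker to use cgroupfs" message; native.cgroupdriver is the stock dockerd exec-opt for this):

	{
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}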
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
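The temp-file-then-sudo-cp dance above is deliberate, for two reasons: a plain `sudo grep ... > /etc/hosts` would fail because the shell performs the redirection as the unprivileged user before sudo ever runs, and inside a Docker container /etc/hosts is bind-mounted, so it must be overwritten in place with cp rather than atomically replaced with mv (mv would swap the inode out from under the mount). Unrolled:

	# rewrite the entry as a normal user, then copy over the bind-mounted file in place
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts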
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
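The four stacked documents rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are deliberately written to kubeadm.yaml.new rather than over the live file; on this restart path the two are diffed a moment later (10:04:43.288) to decide whether the running control plane needs reconfiguring at all:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new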
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
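Each `openssl x509 -hash` / `ln -fs <hash>.0` pair above exists because OpenSSL looks up trust anchors in /etc/ssl/certs by subject-hash filename, not by the PEM's own name. Recreating one of the links by hand (the computed hash matches the b5213941.0 seen a few lines up):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"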
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
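`-checkend 86400` makes openssl exit non-zero when a certificate will expire within the next 24 hours, which is how this pass decides whether an existing cert can be reused or must be regenerated:

	# exits 0 only if the cert is still valid 86400s (24h) from now
	if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "cert expires within 24h, regenerate" >&2
	fi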
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0804 10:04:43.886614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
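Each failed apply is retried with a randomized, growing delay (145ms, 204ms, 305ms in this window, stretching to several seconds in the parallel run interleaved below) until the apiserver starts answering on :8443. A bash-level sketch of the same loop, illustrative only since minikube's actual backoff lives in retry.go:

	delay=0.15
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	    sleep "$delay"
	    delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # roughly double per attempt
	done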
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
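The healthz wait polls the apiserver's health endpoint on the node IP directly; while the static pod is still coming back up, the TCP connect itself is refused, as seen above. The equivalent manual probe (with -k only because this sketch skips loading the cluster CA):

	curl -sk --max-time 2 https://192.168.76.2:8443/healthz && echo ok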
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:04.883286 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
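
Interleaved with the addon retries, the second test process (pid 2163332) polls the apiserver's /healthz endpoint roughly every half second, logging each refused or timed-out probe. The following is a minimal Go sketch of such a health poll under stated assumptions: the function name waitForHealthz, the fixed 500ms interval, and the insecure TLS client are all illustrative; minikube's api_server.go would verify against the cluster CA rather than skipping verification.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the overall deadline passes. InsecureSkipVerify is for this
	// sketch only; a real client would trust the cluster CA instead.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second, // per-request cap, like the log's Client.Timeout errors
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil
				}
				err = fmt.Errorf("unexpected status %d", resp.StatusCode)
			}
			fmt.Printf("healthz not ready, will recheck: %v\n", err)
			time.Sleep(500 * time.Millisecond) // assumed interval; the log shows ~0.5s spacing
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, overall)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
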
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
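	(The api_server.go pairs above poll the apiserver's /healthz endpoint roughly every 500ms and log each refused connection. A minimal sketch of that health-polling loop, assuming a self-signed cluster certificate, hence InsecureSkipVerify; waitForHealthz is an illustrative helper, not minikube's code.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it answers 200 OK or the timeout
    // elapses, logging each failure much like the
    // "Checking apiserver healthz ... stopped" pairs above.
    func waitForHealthz(base string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption: the cluster cert is not in the host trust store.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            fmt.Printf("Checking apiserver healthz at %s/healthz ...\n", base)
            resp, err := client.Get(base + "/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            } else {
                fmt.Printf("stopped: %v\n", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }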
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
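
Note: the delays announced by retry.go grow roughly geometrically with jitter (3.27s, 4.17s, 4.50s, 6.22s above, then 7.29s and longer below). A minimal sketch of such a jittered-backoff retry loop (an illustration of the pattern only, not minikube's actual retry implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs fn up to attempts times, sleeping a jittered,
    // doubling delay between failures, and returns the last error.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
        return err
    }

    func main() {
        err := retryWithBackoff(4, 3*time.Second, func() error {
            return errors.New("apply failed") // stand-in for the kubectl apply above
        })
        fmt.Println("giving up:", err)
    }
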
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
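
Note: two distinct failure modes appear in this run. "connect: connection refused" means nothing was listening on port 8443; the "read: connection reset by peer" variant (e.g. the storage-provisioner error above) means the apiserver accepted the connection and then dropped it mid-request, which is consistent with an apiserver that comes up briefly and crashes rather than one that never started. A small Go illustration of telling the two apart (a hypothetical classifier, not code from this test suite):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
    )

    // classify maps a dial/read error onto the two failure modes seen in the log.
    func classify(err error) string {
        switch {
        case errors.Is(err, syscall.ECONNREFUSED):
            return "refused: nothing listening on the port"
        case errors.Is(err, syscall.ECONNRESET):
            return "reset: peer accepted the connection, then dropped it"
        default:
            return "other"
        }
    }

    func main() {
        // Dialing a port with no listener typically yields ECONNREFUSED.
        _, err := net.Dial("tcp", "127.0.0.1:1")
        fmt.Println(classify(err), "/", err)
    }
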
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c" /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
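(Between probes, the collector enumerates control-plane containers with docker ps name filters and reports "N containers: [...]" (logs.go:282). A minimal sketch of that enumeration, assuming the Docker CLI is on PATH — the real code runs the same command over SSH inside the node — could be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers whose names match the
// k8s_<component> prefix, mirroring the "N containers: [...]" log lines.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}

An empty result, as for "coredns" and "kube-proxy" above, produces the "No container was found matching" warnings.)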
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:49.883713 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
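(Each "Gathering logs for <component>" pair above shells out to docker logs --tail 400 <id> via ssh_runner.go:195. A minimal local sketch, assuming the Docker CLI and substituting any valid container ID for the kube-apiserver ID seen in this run, could be:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs fetches the last n log lines of a container, as the
// "docker logs --tail 400 <id>" commands in the log do. docker logs writes
// to both stdout and stderr, so both streams are captured here.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// "806e7ebaaed1" is the kube-apiserver container ID from this run;
	// replace it with a local container ID when trying this out.
	logs, err := tailContainerLogs("806e7ebaaed1", 400)
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(logs)
}

The loop then sleeps briefly and returns to the healthz probe below.)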
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
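	(The api_server.go:253/269 pair above is a health poll: minikube issues a GET against the apiserver's /healthz endpoint and records "stopped" when the TCP connection is refused, then retries a few seconds later. A minimal sketch of such a poll; the interval, retry count, and TLS handling are chosen for illustration, not taken from minikube:

	    // Illustrative sketch, not minikube source: poll /healthz and treat
	    // a refused connection as "stopped", retrying on an interval.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func checkHealthz(url string) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		// Assumption: a bootstrap probe like this skips certificate
	    		// verification, since the cluster CA may not be trusted yet.
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	resp, err := client.Get(url)
	    	if err != nil {
	    		return fmt.Errorf("stopped: %w", err) // e.g. connect: connection refused
	    	}
	    	defer resp.Body.Close()
	    	if resp.StatusCode != http.StatusOK {
	    		return fmt.Errorf("unhealthy: %s", resp.Status)
	    	}
	    	return nil
	    }

	    func main() {
	    	for attempt := 0; attempt < 5; attempt++ {
	    		if err := checkHealthz("https://192.168.76.2:8443/healthz"); err != nil {
	    			fmt.Println(err)
	    			time.Sleep(3 * time.Second) // cadence chosen for illustration
	    			continue
	    		}
	    		fmt.Println("apiserver healthy")
	    		return
	    	}
	    }
	)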
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
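	(Each "N containers: [...]" line above comes from listing containers, running or exited, whose name matches the k8s_<component> prefix, using exactly the docker ps filter shown in the preceding Run lines. A small sketch of that lookup, assuming only the Docker CLI; the function name is illustrative:

	    // Illustrative sketch, not minikube source: resolve container IDs
	    // for a component via the name filter seen in the log above.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containerIDs returns IDs of all containers (running or not) whose
	    // name matches k8s_<component>.
	    func containerIDs(component string) ([]string, error) {
	    	out, err := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_"+component,
	    		"--format", "{{.ID}}").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
	    		ids, err := containerIDs(c)
	    		if err != nil {
	    			fmt.Println(c, "lookup failed:", err)
	    			continue
	    		}
	    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	    	}
	    }
	)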
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
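	(The gathering cycle that just completed pulls from a fixed set of sources: systemd units via journalctl, kernel messages via dmesg, per-container logs via docker logs --tail 400, and a crictl-with-docker-fallback listing for overall container status. A sketch of such a command table, with the commands copied from this log and the function and variable names illustrative:

	    // Illustrative sketch, not minikube source: a table of log sources
	    // mirroring the "Gathering logs for ..." commands in this report.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    var sources = map[string]string{
	    	"kubelet":          "sudo journalctl -u kubelet -n 400",
	    	"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	    }

	    func gather(cmd string) (string, error) {
	    	// Each command runs through bash, as ssh_runner does in the log.
	    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    	return string(out), err
	    }

	    func main() {
	    	for name, cmd := range sources {
	    		if _, err := gather(cmd); err != nil {
	    			fmt.Printf("gathering %s failed: %v\n", name, err)
	    		}
	    	}
	    }
	)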
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
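	(The interleaved 2149628 lines belong to the parallel no-preload test, which is polling its node's Ready condition and retrying each time the apiserver refuses the connection. A hedged client-go sketch of that kind of check; the kubeconfig path is a placeholder and this is not the test's actual code:

	    // Illustrative client-go sketch, not the test's code: fetch a node
	    // and read its Ready condition, retrying on connection errors.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	    	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	    	if err != nil {
	    		return false, err // e.g. dial tcp 192.168.94.2:8443: connect: connection refused
	    	}
	    	for _, c := range node.Status.Conditions {
	    		if c.Type == corev1.NodeReady {
	    			return c.Status == corev1.ConditionTrue, nil
	    		}
	    	}
	    	return false, nil
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	for {
	    		ready, err := nodeReady(cs, "no-preload-499486")
	    		if err != nil {
	    			fmt.Println("will retry:", err)
	    			time.Sleep(2500 * time.Millisecond)
	    			continue
	    		}
	    		if ready {
	    			fmt.Println("node is Ready")
	    			return
	    		}
	    		time.Sleep(2500 * time.Millisecond)
	    	}
	    }
	)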
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 62ad65a28324
	
	** /stderr **
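	(The "No such container" failures starting here are a race, not a new fault: the control-plane containers were recreated between the docker ps enumeration and the docker logs call. Note the fresh IDs 546ccc0d47d3, 1f24d4315f70, and db8e2ca87b17 that the next cycle resolves a few seconds later, so the collector only warns and moves on. A small sketch of tolerating that churn; illustrative, not minikube's code:

	    // Illustrative sketch: treat a stale container ID as a warning and
	    // defer to the next enumeration cycle rather than aborting.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func tailLogs(id string) (string, error) {
	    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	    	if err != nil && strings.Contains(string(out), "No such container") {
	    		// The container was replaced after it was listed; the next
	    		// gathering cycle will pick up the new ID.
	    		return "", fmt.Errorf("stale container id %s (will re-enumerate)", id)
	    	}
	    	return string(out), err
	    }

	    func main() {
	    	if _, err := tailLogs("62ad65a28324"); err != nil {
	    		fmt.Println("W:", err)
	    	}
	    }
	)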
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 806e7ebaaed1
	
	** /stderr **
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 5321aae275b7
	
	** /stderr **
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
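The repeated "Checking apiserver healthz ... connection refused" / "stopped" pairs are a polling loop: the probe hits https://<node-ip>:8443/healthz roughly every three seconds and, on each refusal, falls back to the container-enumeration and log-gathering pass seen above. A minimal sketch of such a poll loop, assuming a self-signed apiserver certificate (illustrative; `waitForHealthz` is a hypothetical name, not minikube's API):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 OK or the deadline passes. Connection-refused errors, as in the
// log, are treated as "not ready yet" and retried.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate, so a
		// diagnostic probe typically skips verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	return fmt.Errorf("apiserver %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}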
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
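Each gathering pass locates control-plane components by container name: cri-dockerd gives Kubernetes-managed containers a `k8s_` name prefix, so `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` yields the IDs reported on the `logs.go:282` lines (it includes exited containers, which is why two kube-scheduler IDs keep appearing). A small illustrative Go equivalent, shelling out to the same docker CLI (hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers, running or exited,
// whose name matches k8s_<component>, mirroring
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// cf. the "N containers: [...]" logs.go:282 lines above
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}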
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
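The eight docker ps queries above are minikube's container-discovery pass: one name filter per control-plane component, returning container IDs, with empty lists for coredns, kube-proxy, kindnet, and kubernetes-dashboard because those pods never started. The same pass, sketched as a single hypothetical convenience loop (not part of minikube); the k8s_ name prefix, the filter, and the format string are exactly those queried in the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # Each iteration mirrors one "docker ps -a --filter" line in the log.
      echo "$c: $(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')"
    done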
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
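Each retry cycle then tails the same fixed set of log sources before the next healthz attempt. For reference, the full gathering pass collapsed into one script-like list; every command appears verbatim in this log, and the container IDs are the ones this run resolved (they will differ on another run):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    docker logs --tail 400 546ccc0d47d3   # kube-apiserver
    docker logs --tail 400 1f24d4315f70   # etcd
    docker logs --tail 400 db8e2ca87b17   # kube-controller-manager
    docker logs --tail 400 4d9bcb766848   # kube-scheduler
    docker logs --tail 400 89bc4723825b   # kube-scheduler (older container)
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a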
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:47.883235 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:50.383116 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:52.383162 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:54.383410 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:56.383810 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:58.883290 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:00.883650 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:03.383190 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:05.383617 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:07.384051 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:09.883346 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:11.883783 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:13.884208 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:16.383435 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:18.383891 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:20.883429 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:22.884027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:25.383556 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:27.883164 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:29.883548 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:31.883955 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:34.383514 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:36.883247 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:38.883512 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:40.884109 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:43.383400 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:09:45.383376 2149628 node_ready.go:38] duration metric: took 6m0.000813638s for node "no-preload-499486" to be "Ready" ...
	I0804 10:09:45.385759 2149628 out.go:201] 
	W0804 10:09:45.386973 2149628 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 10:09:45.386995 2149628 out.go:270] * 
	W0804 10:09:45.389624 2149628 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:09:45.390891 2149628 out.go:201] 
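	Both start attempts above fail the same way: minikube polls the apiserver's /healthz endpoint for six minutes, every probe is refused, and the run exits with GUEST_START. A minimal sketch of repeating that probe by hand, assuming the endpoint addresses from this run and a placeholder profile name:
	
	    # Same probe the start loop runs; -k skips TLS verification and
	    # --max-time bounds each attempt the way the retry loop does.
	    curl -sk --max-time 2 https://192.168.76.2:8443/healthz; echo
	    curl -sk --max-time 2 https://192.168.94.2:8443/healthz; echo
	    # If both refuse the connection, look at the control-plane containers
	    # on the node itself (<profile> is a placeholder):
	    minikube ssh -p <profile> -- sudo docker ps -a --filter name=k8s_kube-apiserver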
	
	
	==> Docker <==
	Aug 04 10:03:42 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:42Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 10:03:42 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:42Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 10:03:42 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 10:03:42 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:42Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 10:03:42 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:42Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 10:03:42 no-preload-499486 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8e25ebb8a89d445633ee72689dd9126eae7afe58d9a207dbe2cdc5da1c82e7c5/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c8e64888584066fdfe6acecc56b1467a84c162997e4f0b1a939859400ab4a5f/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/26755274d895161ffe5b3f341bb7944f31daecb44dda61932240318d73b09b9c/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/faaa3a488dc04608657ace902b23aff9e53e1d14755fdf70c32d9c4a86ae6ec6/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 dockerd[1060]: time="2025-08-04T10:03:45.970130751Z" level=info msg="ignoring event" container=fc533eec1834b08c163742338f45821b5f02c6c5578ebe0fa5487906728547c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:07 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:07.509440559Z" level=info msg="ignoring event" container=835331562e21d7f94c792e7e547dd630d261e361d3dbf1c95186b90631d45ab4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:08 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:08.536777903Z" level=info msg="ignoring event" container=6c7c3e8e5a5a316e53d6dfbe663ac4dca13a60be5ece3da5dc2247e32f82d17a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:08 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:08.805380544Z" level=info msg="ignoring event" container=465ed5c63105c622faf628dc45dffc004b55d09148a84a0c45ec2f8a27c97fbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:39.818927796Z" level=info msg="ignoring event" container=0595640f46489eb8407e6e761b084aaf6097c9c319d96bc72e2a6da471c5d644 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:44 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:44.826174830Z" level=info msg="ignoring event" container=c53148ebe39d8e04e877760553c72fbbb0efca7dc09fc1550c0d193752988ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:46 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:46.743926255Z" level=info msg="ignoring event" container=c90ac788092b4d99962cf322dca6016fcbab4b4a8a55f82e1817c83b0f7d9215 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:28 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:28.445977627Z" level=info msg="ignoring event" container=624b9721d7e89385a14cf7a113afd2059fd713021c967546422f8d3e449b1c07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:33 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:33.808565031Z" level=info msg="ignoring event" container=86926cfa626f66ab359d1d7b13dfaa8c7749178320dbff42dccd2306e7130172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:39.468564300Z" level=info msg="ignoring event" container=7c4f93cb4bfbd43195edf99e929820bd4cd2ff17c1c7e1820fc35244264f90eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:39.443989985Z" level=info msg="ignoring event" container=b0de8a87430e54e04bae9e0fe793e3fda728c66cafdbbb857dfa8b70b7b849a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:41 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:41.920345198Z" level=info msg="ignoring event" container=95273882a0ba3beeec00a1ee16fc2e13f9dc7d28771bbf35eeed20bc1e617760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:56 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:56.807457292Z" level=info msg="ignoring event" container=9ce95901ec688dadabbfeba65d8a96e0cd422aa6483ce4093631e0769ecec314 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:23 no-preload-499486 dockerd[1060]: time="2025-08-04T10:08:23.128503844Z" level=info msg="ignoring event" container=152aef9e02ab4ddae450a3b16f379f3b222a44743fca7913d5d483269f9dfc2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:31 no-preload-499486 dockerd[1060]: time="2025-08-04T10:08:31.608495511Z" level=info msg="ignoring event" container=8fb3f2292ab14a56a1592fff79c30568329e27afc3d74f06f288f788a6b3c3a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
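	The repeated "ignoring event ... TaskDelete" lines are the control-plane containers being torn down each time they crash. As a hedged helper (assuming the Docker CLI is available on the node), one of those container IDs can be mapped back to its Kubernetes-style name:
	
	    # Resolve a TaskDelete container ID to its name and status.
	    sudo docker ps -a --filter id=8fb3f2292ab1 --format '{{.ID}} {{.Names}} {{.Status}}'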
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8fb3f2292ab14       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   10                  faaa3a488dc04       kube-controller-manager-no-preload-499486
	152aef9e02ab4       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            10                  26755274d8951       kube-apiserver-no-preload-499486
	9ce95901ec688       1e30c0b1e9b99       2 minutes ago        Exited              etcd                      10                  8e25ebb8a89d4       etcd-no-preload-499486
	f9db373fc015a       21d34a2aeacf5       6 minutes ago        Running             kube-scheduler            1                   5c8e648885840       kube-scheduler-no-preload-499486
	2a1c20b2ffee8       21d34a2aeacf5       11 minutes ago       Exited              kube-scheduler            0                   d2b1bfd452832       kube-scheduler-no-preload-499486
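	The table confirms a crash loop: etcd, kube-apiserver, and kube-controller-manager have all exited on their tenth restart attempt, and only the scheduler is still running. Pulling the tail of the exited containers' logs, with the IDs taken from the table above, shows why (see the sections below):
	
	    sudo docker logs --tail 400 9ce95901ec688   # etcd: exits at startup
	    sudo docker logs --tail 400 152aef9e02ab4   # kube-apiserver: cannot reach etcd
	    sudo docker logs --tail 400 8fb3f2292ab14   # kube-controller-manager: apiserver never healthy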
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:09:46.507818    4108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:46.508295    4108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:46.509810    4108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:46.510254    4108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:46.511832    4108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
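	kubectl fails here for the same reason as everything else: the on-node kubeconfig targets localhost:8443 and nothing is listening there. A quick check of the server address that kubeconfig actually points at (path taken from the failing command above):
	
	    sudo grep 'server:' /var/lib/minikube/kubeconfig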
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003976] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000006] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +3.807738] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000008] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.251962] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +7.935446] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000034] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000005] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[ +23.237968] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 e9 0e 42 0b 64 08 06
	[  +0.000446] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 d5 e2 93 f6 db 08 06
	[Aug 4 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da a7 c8 ad 52 b3 08 06
	[  +0.000606] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da d5 10 fe 4e 73 08 06
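	The "martian source" messages are the kernel flagging packets addressed to 10.96.0.1 (the cluster's default service IP) arriving on the Docker bridge; they are noisy but unrelated to this failure. They appear only because martian logging is enabled, which can be confirmed with:
	
	    # 1 means the kernel logs martian packets, producing the lines above.
	    sysctl net.ipv4.conf.all.log_martians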
	
	
	==> etcd [9ce95901ec68] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
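	This is the root cause of the cascade: etcd exits immediately with "flag provided but not defined: -proxy-refresh-interval", i.e. the etcd binary in this image no longer accepts a flag that the generated static-pod manifest still passes (the v2 proxy flags were removed from newer etcd releases). A hedged way to confirm the mismatch on the node, assuming the standard kubeadm manifest path and with <etcd-image> as a placeholder:
	
	    # Which manifest line injects the rejected flag?
	    sudo grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml
	    # Does this etcd build accept the flag at all?
	    sudo docker run --rm <etcd-image> etcd --help 2>&1 | grep -c 'proxy-refresh-interval'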
	
	
	
	==> kernel <==
	 10:09:46 up 1 day, 18:51,  0 users,  load average: 0.51, 1.16, 1.63
	Linux no-preload-499486 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [152aef9e02ab] <==
	W0804 10:08:03.096166       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:03.096166       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 10:08:03.097592       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 10:08:03.104910       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 10:08:03.111991       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 10:08:03.112016       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 10:08:03.112300       1 instance.go:232] Using reconciler: lease
	W0804 10:08:03.113290       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:03.113330       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:04.097512       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:04.097512       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:04.114375       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:05.572314       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:05.793357       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:05.976928       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:07.661013       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:07.887566       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:08.116038       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:11.427424       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:12.346927       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:12.434797       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:17.134398       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:17.740491       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:19.437205       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 10:08:23.112942       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
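	The dial loop above is the apiserver retrying etcd's client port until its storage-factory deadline expires. A minimal Go sketch of the same probe, assuming it runs inside the node where 127.0.0.1:2379 is etcd's client address, reproduces the connection-refused symptom:

	// etcd_probe.go: a hypothetical diagnostic sketch, not part of minikube or the test suite.
	// It repeats the TCP dial the apiserver attempts in the log lines above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			// While etcd is down this prints "connect: connection refused",
			// matching the grpc transport errors above.
			fmt.Println("etcd client port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("etcd client port is accepting connections")
	}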
	
	
	==> kube-controller-manager [8fb3f2292ab1] <==
	I0804 10:08:10.621451       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:08:11.574479       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:08:11.574505       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:08:11.575964       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:08:11.576084       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:08:11.576934       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:08:11.577185       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:08:31.580108       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.94.2:8443/healthz\": dial tcp 192.168.94.2:8443: connect: connection refused"
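	The controller-manager aborts when the apiserver health check keeps failing. A minimal sketch of that /healthz probe, written as hypothetical diagnostic code rather than minikube's own; InsecureSkipVerify is used only because the apiserver's serving certificate is signed by minikube's CA and is not trusted from outside:

	// healthz_probe.go: a hypothetical sketch of the check that failed above.
	// The address comes straight from the error message.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err) // "connection refused" while the apiserver is down
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}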
	
	
	==> kube-scheduler [2a1c20b2ffee] <==
	E0804 10:02:28.175749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:02:31.304161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:02:32.791509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:02:34.007548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:02:40.294146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:02:43.128115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.94.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:02:45.421355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:02:50.083757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:02:51.361497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:05.497126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:03:08.537516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:03:11.097373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:03:11.729593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:03:12.801646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:03:17.035915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:03:18.849345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:03:23.883368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:03:24.360764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:03:24.447406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:03:25.585024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:03:26.613910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:03:28.018647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:03:28.621818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:34.452113       1 server.go:274] "handlers are not fully synchronized" err="context canceled"
	E0804 10:03:34.452246       1 run.go:72] "command failed" err="finished without leader elect"
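	The %21%3D and %2C runs in the reflector URLs above are URL-encoded field selectors, not corruption; a short sketch decoding one with only the Go standard library:

	// decode_selector.go: a hypothetical sketch; the selector string is copied
	// from the pod list URL in the scheduler log above.
	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		raw := "status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed"
		decoded, err := url.QueryUnescape(raw)
		if err != nil {
			panic(err)
		}
		fmt.Println(decoded) // prints: status.phase!=Succeeded,status.phase!=Failed
	}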
	
	
	==> kube-scheduler [f9db373fc015] <==
	E0804 10:08:37.061510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:08:40.048951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:08:43.898109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:08:44.453657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:08:54.180421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:08:57.292945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:08:57.615618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:09:01.117718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:09:01.312899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:09:04.727727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:09:06.112902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:09:12.083280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.94.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:09:14.002969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:09:17.303032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:09:21.828884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:09:21.981834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:09:22.651747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:09:25.450957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:09:30.668250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:09:35.906839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:09:36.872452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:09:37.396054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:09:38.101173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:09:41.976932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:09:43.941900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	
	
	==> kubelet <==
	Aug 04 10:09:30 no-preload-499486 kubelet[1550]: E0804 10:09:30.685920    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-no-preload-499486_kube-system(f4c9aec0fc04dec0ce14ce1fda478878)\"" pod="kube-system/kube-apiserver-no-preload-499486" podUID="f4c9aec0fc04dec0ce14ce1fda478878"
	Aug 04 10:09:31 no-preload-499486 kubelet[1550]: E0804 10:09:31.287913    1550 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dno-preload-499486&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Aug 04 10:09:31 no-preload-499486 kubelet[1550]: E0804 10:09:31.685311    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:09:31 no-preload-499486 kubelet[1550]: I0804 10:09:31.685420    1550 scope.go:117] "RemoveContainer" containerID="8fb3f2292ab14a56a1592fff79c30568329e27afc3d74f06f288f788a6b3c3a9"
	Aug 04 10:09:31 no-preload-499486 kubelet[1550]: E0804 10:09:31.685596    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-no-preload-499486_kube-system(a4b1d6b4ed5bdfde5a36a79a8a11f1a7)\"" pod="kube-system/kube-controller-manager-no-preload-499486" podUID="a4b1d6b4ed5bdfde5a36a79a8a11f1a7"
	Aug 04 10:09:32 no-preload-499486 kubelet[1550]: E0804 10:09:32.832080    1550 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.94.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.94.2:8443: connect: connection refused" event="&Event{ObjectMeta:{no-preload-499486.18588837015167f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:no-preload-499486,UID:no-preload-499486,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node no-preload-499486 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:no-preload-499486,},FirstTimestamp:2025-08-04 10:03:44.687499253 +0000 UTC m=+0.105529320,LastTimestamp:2025-08-04 10:03:44.687499253 +0000 UTC m=+0.105529320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:no-preload-499486,}"
	Aug 04 10:09:33 no-preload-499486 kubelet[1550]: I0804 10:09:33.137206    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:09:33 no-preload-499486 kubelet[1550]: E0804 10:09:33.137654    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:09:34 no-preload-499486 kubelet[1550]: E0804 10:09:34.128970    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:09:34 no-preload-499486 kubelet[1550]: E0804 10:09:34.723529    1550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"no-preload-499486\" not found"
	Aug 04 10:09:35 no-preload-499486 kubelet[1550]: E0804 10:09:35.684941    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:09:35 no-preload-499486 kubelet[1550]: I0804 10:09:35.685043    1550 scope.go:117] "RemoveContainer" containerID="9ce95901ec688dadabbfeba65d8a96e0cd422aa6483ce4093631e0769ecec314"
	Aug 04 10:09:35 no-preload-499486 kubelet[1550]: E0804 10:09:35.685266    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-no-preload-499486_kube-system(c3193c4a9a9a9175b95883d7fe1bad87)\"" pod="kube-system/etcd-no-preload-499486" podUID="c3193c4a9a9a9175b95883d7fe1bad87"
	Aug 04 10:09:40 no-preload-499486 kubelet[1550]: I0804 10:09:40.139320    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:09:40 no-preload-499486 kubelet[1550]: E0804 10:09:40.139768    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:09:40 no-preload-499486 kubelet[1550]: E0804 10:09:40.444957    1550 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.94.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 10:09:41 no-preload-499486 kubelet[1550]: E0804 10:09:41.130246    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:09:42 no-preload-499486 kubelet[1550]: E0804 10:09:42.685167    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:09:42 no-preload-499486 kubelet[1550]: I0804 10:09:42.685288    1550 scope.go:117] "RemoveContainer" containerID="152aef9e02ab4ddae450a3b16f379f3b222a44743fca7913d5d483269f9dfc2b"
	Aug 04 10:09:42 no-preload-499486 kubelet[1550]: E0804 10:09:42.685448    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-no-preload-499486_kube-system(f4c9aec0fc04dec0ce14ce1fda478878)\"" pod="kube-system/kube-apiserver-no-preload-499486" podUID="f4c9aec0fc04dec0ce14ce1fda478878"
	Aug 04 10:09:42 no-preload-499486 kubelet[1550]: E0804 10:09:42.833004    1550 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.94.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.94.2:8443: connect: connection refused" event="&Event{ObjectMeta:{no-preload-499486.18588837015167f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:no-preload-499486,UID:no-preload-499486,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node no-preload-499486 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:no-preload-499486,},FirstTimestamp:2025-08-04 10:03:44.687499253 +0000 UTC m=+0.105529320,LastTimestamp:2025-08-04 10:03:44.687499253 +0000 UTC m=+0.105529320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:no-preload-499486,}"
	Aug 04 10:09:44 no-preload-499486 kubelet[1550]: E0804 10:09:44.685061    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:09:44 no-preload-499486 kubelet[1550]: I0804 10:09:44.685174    1550 scope.go:117] "RemoveContainer" containerID="8fb3f2292ab14a56a1592fff79c30568329e27afc3d74f06f288f788a6b3c3a9"
	Aug 04 10:09:44 no-preload-499486 kubelet[1550]: E0804 10:09:44.685342    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-no-preload-499486_kube-system(a4b1d6b4ed5bdfde5a36a79a8a11f1a7)\"" pod="kube-system/kube-controller-manager-no-preload-499486" podUID="a4b1d6b4ed5bdfde5a36a79a8a11f1a7"
	Aug 04 10:09:44 no-preload-499486 kubelet[1550]: E0804 10:09:44.724206    1550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"no-preload-499486\" not found"
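	The back-off of 2m40s reported for each control-plane container matches the kubelet's exponential restart back-off, assuming the defaults of a 10s initial delay doubled per crash and capped at 5m; a small sketch of that progression:

	// backoff_sketch.go: a hypothetical illustration, not kubelet code.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		backoff := 10 * time.Second
		maxBackoff := 5 * time.Minute
		for crash := 1; crash <= 6; crash++ {
			// Crash 5 prints "back-off 2m40s", the value seen in the log.
			fmt.Printf("after crash %d: back-off %v\n", crash, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}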
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486: exit status 2 (264.28928ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-499486" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (371.76s)
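The --format value passed to minikube status above is an ordinary Go text/template. A minimal sketch of how such a template renders, with a hypothetical struct standing in for minikube's real status type:

	// status_template.go: illustration only; "status" is not minikube's type,
	// but any struct with an APIServer field renders the same way.
	package main

	import (
		"os"
		"text/template"
	)

	type status struct{ APIServer string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = t.Execute(os.Stdout, status{APIServer: "Stopped"}) // prints: Stopped
	}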

x
+
TestStartStop/group/newest-cni/serial/SecondStart (254.98s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 80 (4m12.711077361s)

-- stdout --
	* [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	* Pulling base image v0.0.47-1753871403-21198 ...
	* Restarting existing docker container for "newest-cni-768931" ...
	* Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
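The `overlay` answer above comes from probing the root filesystem with `df`; the result feeds the storage-driver defaults chosen a few steps later. A hedged local equivalent of the same probe (the helper name is made up):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType mirrors `df --output=fstype / | tail -n 1`: df prints a header
// line ("Type") followed by the filesystem type, so we take the last field.
func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return "", fmt.Errorf("unexpected df output %q", out)
	}
	return fields[len(fields)-1], nil
}

func main() {
	fs, _ := rootFSType()
	fmt.Println(fs) // prints "overlay" inside a kicbase container
}
```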
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
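The empty output above is the success path of the unit-swap one-liner: `diff -u` exits 0 when the freshly rendered `docker.service.new` matches the installed unit, so the `|| { mv; daemon-reload; restart; }` branch only fires when something actually changed, and re-provisioning an unchanged machine never restarts Docker. A sketch of issuing that idempotent swap through a command runner (the `commandRunner` interface is an assumption standing in for minikube's SSH runner):

```go
package provision

import "fmt"

// commandRunner is an assumed stand-in for minikube's SSH command runner.
type commandRunner interface {
	Run(cmd string) error
}

// applyUnit installs a freshly rendered unit only when it differs from the
// one on disk: diff exits 0 on "no change", so the mv/reload/restart branch
// after || runs only when the rendered file actually changed.
func applyUnit(r commandRunner, unitPath string) error {
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unitPath)
	return r.Run(cmd)
}
```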
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
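Each of the `sed -i -r` invocations above patches a single key in `/etc/containerd/config.toml` in place, and the restart then picks the edits up. A rough pure-Go equivalent of the `SystemdCgroup` edit (the path handling and function name are assumptions for the sketch):

```go
package cruntime

import (
	"os"
	"regexp"
)

// setSystemdCgroup rewrites every `SystemdCgroup = ...` line in a containerd
// config, mirroring the sed call in the log above.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	val := "false"
	if enabled {
		val = "true"
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
	return os.WriteFile(path, out, 0o644)
}
```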
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
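The four lines above are `crictl version`'s plain-text output echoed into the log. If one wanted to consume it programmatically, a small parser sketch (illustrative only, not minikube's code):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits `crictl version` output into key/value pairs.
func parseCrictlVersion(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			kv[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return kv
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  28.3.3\nRuntimeApiVersion:  v1\n"
	fmt.Println(parseCrictlVersion(out)["RuntimeVersion"]) // 28.3.3
}
```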
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
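The `/etc/hosts` update above is a classic idempotent rewrite: `grep -v` drops any stale `host.minikube.internal` line, `echo` appends the fresh mapping, and staging the result in a temp file before `sudo cp` keeps the edit close to atomic. The same filtering step expressed in Go, as a sketch (function name invented):

```go
package provision

import "strings"

// withHostsEntry drops any existing line mapping `name` and appends the fresh
// ip/name pair, matching the grep -v / echo pipeline in the log above.
func withHostsEntry(lines []string, ip, name string) []string {
	var out []string
	for _, l := range lines {
		if !strings.HasSuffix(l, "\t"+name) {
			out = append(out, l)
		}
	}
	return append(out, ip+"\t"+name)
}
```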
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
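The image list is dumped twice on purpose: once to decide whether the preload tarball needs extracting and once to confirm the result; here both dumps already contain every expected ref, so loading is skipped. A sketch of that containment check (function name is invented):

```go
package machine

// imagesPreloaded reports whether every wanted image ref already shows up in
// the `docker images` listing, letting the preload extraction be skipped.
func imagesPreloaded(have, want []string) bool {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	for _, img := range want {
		if !got[img] {
			return false
		}
	}
	return true
}
```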
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
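The kubeadm/kubelet/kube-proxy YAML above is rendered from the option set logged earlier; the `extraArgs` name/value pairs follow the `kubeadm.k8s.io/v1beta4` schema. A toy rendering step via `text/template` to illustrate the mechanism (the template here is a pared-down assumption, not minikube's real one):

```go
package main

import (
	"os"
	"text/template"
)

// A pared-down stand-in for minikube's kubeadm template, kept to the fields
// visible in the log; the real template is far larger.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`))

func main() {
	_ = kubeadmTmpl.Execute(os.Stdout, struct {
		KubernetesVersion, PodSubnet, ServiceCIDR string
	}{"v1.34.0-beta.0", "10.42.0.0/16", "10.96.0.0/12"})
}
```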
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
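The `ln -fs ... <hash>.0` commands above implement OpenSSL's hashed certificate directory convention: `openssl x509 -hash -noout` prints the subject-name hash, and the TLS stack looks up trust anchors via a symlink named `<hash>.0`. A sketch of deriving that link name (the helper name is an assumption):

```go
package certs

import (
	"os/exec"
	"strings"
)

// subjectHashLink returns the /etc/ssl/certs symlink name for a PEM cert:
// OpenSSL resolves trust anchors via files named "<subject-hash>.0".
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}
```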
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
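The `-checkend 86400` probes above exit non-zero only if a certificate expires within the next 24 hours, which is what would trigger regeneration. A pure-Go equivalent using `crypto/x509` (sketch; the function name is invented):

```go
package certs

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports true
// when the certificate's NotAfter falls inside the given window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}
```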
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
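Failures like the one above are expected while the apiserver is still coming up; the `retry.go` entries show each apply being rerun after a short randomized delay rather than aborting. A simplified take on that retry loop (the delays and names are assumptions, not minikube's exact policy):

```go
package util

import (
	"math/rand"
	"time"
)

// retryUntil reruns fn until it succeeds or the budget is spent, sleeping a
// jittered delay between attempts; a simplification of minikube's retry.go.
func retryUntil(budget time.Duration, fn func() error) error {
	start := time.Now()
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > budget {
			return err
		}
		time.Sleep(time.Duration(100+rand.Intn(300)) * time.Millisecond)
	}
}
```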
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
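Each failed apply is handed back to retry.go, which schedules another attempt after a short, growing, jittered delay (204ms, 410ms, 523ms, ... stretching to several seconds later in this log). A minimal sketch of that pattern; the concrete schedule (base delay, growth factor, jitter) is an assumption, and the real retry.go may differ:

// Sketch: retry a fallible operation with jittered, roughly doubling
// delays, in the spirit of the retry.go lines in this log. The exact
// backoff schedule here is an assumption.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent retries don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("connection refused") // stand-in for the apply failures above
		}
		return nil
	})
	fmt.Println("result:", err)
}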
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
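	These paired lines are minikube's readiness probe: api_server.go issues a GET against https://192.168.76.2:8443/healthz, logs "stopped" when the dial is refused, and tries again roughly every 500ms. A hedged sketch of that polling loop follows; the function name, per-request timeout, and TLS handling are illustrative assumptions, not minikube's actual code.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls the apiserver's /healthz endpoint until it answers
	    // 200 OK or the overall deadline passes. "connection refused" simply means
	    // nothing is listening yet, so the loop sleeps and probes again.
	    func waitForHealthz(url string, deadline time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // per-request budget (assumed)
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustrative
	            },
	        }
	        stop := time.Now().Add(deadline)
	        for time.Now().Before(stop) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // apiserver is serving
	                }
	            }
	            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	        }
	        return fmt.Errorf("apiserver not healthy at %s after %s", url, deadline)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }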
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
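	The "will retry after ..." messages come from retry.go:31. The delays grow irregularly (1.4s, 3.4s, 3.3s, 5.4s, and so on), which is the signature of a jittered backoff. The sketch below reproduces that shape under stated assumptions: the policy, the helper name, and the attempt budget are guesses for illustration, not minikube's implementation.

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs "kubectl apply --force -f manifest" until it
	    // succeeds or the attempt budget is spent, sleeping a growing, jittered
	    // delay between attempts so parallel appliers don't retry in lockstep.
	    func applyWithRetry(kubectlPath, manifest string, attempts int) error {
	        base := time.Second
	        for i := 0; i < attempts; i++ {
	            out, err := exec.Command(kubectlPath, "apply", "--force", "-f", manifest).CombinedOutput()
	            if err == nil {
	                return nil
	            }
	            delay := base + time.Duration(rand.Int63n(int64(base)))
	            fmt.Printf("apply failed, will retry after %s: %v\n%s", delay, err, out)
	            time.Sleep(delay)
	            base *= 2
	        }
	        return fmt.Errorf("apply of %s failed after %d attempts", manifest, attempts)
	    }

	    func main() {
	        // Paths mirror the log; this is a sketch, not minikube's addon code.
	        _ = applyWithRetry("/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
	            "/etc/kubernetes/addons/storage-provisioner.yaml", 5)
	    }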
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c" /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
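
Each collection pass starts by discovering container IDs with docker ps -a --filter name=k8s_<component> --format {{.ID}}, one component at a time, exactly as the Run: lines above show. A self-contained sketch of that discovery step, assuming a local docker CLI (minikube actually runs these through ssh_runner inside the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists containers, running or exited, whose names match the
    // kubeadm convention k8s_<component>_... .
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list this log iterates over.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: error: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
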
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
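
Every "describe nodes" attempt above fails the same way: kubectl inside the node targets localhost:8443 and the dial is refused before a TLS handshake can even start, which means nothing is listening on the apiserver port at all (as opposed to an unhealthy but listening apiserver). A bare TCP dial reproduces that distinction; the port is the one from this log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connect: connection refused" here matches the kubectl failures above;
        // a listening-but-sick apiserver would accept the connection instead.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
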
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
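
Note the diagnostic signal in this iteration: the kube-controller-manager list has just dropped from two containers ([5321aae275b7 69f71bfef17b]) to one ([5321aae275b7]), so the older instance 69f71bfef17b is gone even from docker ps -a, presumably cleaned up by container GC, while apiserver container 806e7ebaaed1 still never reaches a listening state. When reading a log like this, docker inspect on the surviving IDs shows at a glance whether they are crash-looping; a sketch using the IDs named above (the field selection is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Status, exit code and restart count reveal a crash loop at a glance.
        for _, id := range []string{"806e7ebaaed1", "62ad65a28324", "5321aae275b7"} {
            out, err := exec.Command("docker", "inspect", "--format",
                "{{.State.Status}} exit={{.State.ExitCode}} restarts={{.RestartCount}}", id).Output()
            if err != nil {
                fmt.Printf("%s: inspect failed: %v\n", id, err)
                continue
            }
            fmt.Printf("%s: %s", id, out)
        }
    }
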
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
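The three "No such container" failures in this round (62ad65a28324, 806e7ebaaed1, 5321aae275b7) show that container IDs captured during the enumeration around 10:07:20 were already gone by 10:07:41: the control-plane containers are being torn down and recreated between polls, which is why the next enumeration returns fresh IDs (546ccc0d47d3, 1f24d4315f70). A hypothetical way to watch that churn from inside the node:

    watch -n 2 'docker ps -a --filter=name=k8s_kube-apiserver --format "{{.ID}} {{.Status}}"'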
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
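The recurring "Checking apiserver healthz" probe that drives this loop is a plain HTTPS GET against the node's control-plane endpoint; every refused connection triggers another full round of container enumeration and log gathering. A rough manual equivalent (the -k flag here skips certificate verification purely for illustration; minikube's own client presumably validates against the cluster certificates):

    curl -ks https://192.168.76.2:8443/healthz; echo
    # prints "ok" once kube-apiserver is serving; "connection refused" until then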
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
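Each gathering round follows the same two-step pattern per component: enumerate its containers by Docker name filter, then tail the last 400 lines of each match. Stripped of the ssh_runner wrapper, that amounts to:

    # first list the component's containers (name filter copied from the log) ...
    docker ps -a --filter=name=k8s_etcd --format={{.ID}}
    # ... then tail each returned ID, e.g. the etcd container seen above
    docker logs --tail 400 1f24d4315f70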
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
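The "container status" step relies on a shell fallback chain: use crictl when it is installed, otherwise fall back to plain docker ps. With the backtick substitution rewritten as $(...) and quoting added, the command from the log is equivalent to:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    # if which finds no crictl, the echo keeps the command word non-empty;
    # the crictl invocation then fails and the docker branch runs instead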
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
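
Each retry cycle in the log above opens with the same two-line probe: minikube checks the apiserver's /healthz endpoint at https://192.168.76.2:8443 and immediately records "stopped: ... connection refused" because nothing is listening on that port. A minimal Go sketch of such a probe, assuming a cluster-local self-signed certificate (illustrative only, not minikube's actual api_server.go code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves /healthz over TLS with a cluster-local CA,
        // so certificate verification is skipped in this sketch.
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            // Matches the log: dial tcp 192.168.76.2:8443: connect: connection refused
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
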
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
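
When the probe fails, minikube falls back to enumerating control-plane containers, one docker ps -a name filter per component (Docker names Kubernetes pod containers k8s_<component>_<pod>_...), which is why the counts above read "1 containers: [546ccc0d47d3]" for kube-apiserver but "0 containers: []" for coredns and kube-proxy. A hypothetical helper reproducing that lookup:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name matches k8s_<component>, mirroring the filter in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            ids, _ := containerIDs(c)
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
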
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
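
The "describe nodes" step fails identically in every cycle: kubectl runs with the node-local /var/lib/minikube/kubeconfig, whose server address is localhost:8443, and since the kube-apiserver container is not listening there, client-go's discovery cache (the memcache.go lines) logs five connection-refused errors before kubectl gives up. The failure reduces to a refused TCP dial, as this sketch shows:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // With no apiserver bound to port 8443, the dial fails immediately
        // with "connect: connection refused", exactly as in the stderr above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        conn.Close()
    }
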
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
	
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0": exit status 80
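The exit status 80 corresponds to the GUEST_START failure reported in the log: the apiserver never answered its healthz probe within the 6m0s node-wait budget. Each iteration of the loop above rescans the control-plane containers (docker ps -a --filter=name=k8s_<component>), re-collects their logs, and probes https://192.168.76.2:8443/healthz again; kubectl describe nodes fails inside the guest for the same underlying reason (localhost:8443 connection refused). A minimal Go sketch of that probe loop, reusing the node IP, port, probe interval, and deadline seen in this run (an illustrative reproduction under those assumptions, not minikube's actual api_server.go implementation):

	// healthzpoll.go: sketch of the apiserver healthz polling loop seen above.
	// Node IP, port, interval, and deadline are taken from this log run.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves a self-signed certificate, so certificate
		// verification is skipped for this probe.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // matches "wait 6m0s for node"
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}

Run against this profile, the sketch ends the same way the test did: every Get returns connection refused until the deadline passes.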
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-768931
helpers_test.go:235: (dbg) docker inspect newest-cni-768931:

-- stdout --
	[
	    {
	        "Id": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	        "Created": "2025-08-04T09:54:35.028106074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2163578,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T10:04:32.896051547Z",
	            "FinishedAt": "2025-08-04T10:04:31.554642323Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hosts",
	        "LogPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd-json.log",
	        "Name": "/newest-cni-768931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-768931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-768931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	                "LowerDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-768931",
	                "Source": "/var/lib/docker/volumes/newest-cni-768931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-768931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-768931",
	                "name.minikube.sigs.k8s.io": "newest-cni-768931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d496a379f643afdf0008eeaa73490cdbbab104feff9921da81864e373d58ba90",
	            "SandboxKey": "/var/run/docker/netns/d496a379f643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-768931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:1d:38:75:59:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b469f2b8beae070883e49bfb67a442aa4bbac8703dfdd341c34c8d2ed3e42c07",
	                    "EndpointID": "349a5e6b8e6d705e3fe7a8f3cfcd94606e43e7038005d90f73899543e4f770f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-768931",
	                        "056ddd51825a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
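The inspect output narrows the failure to the guest: the container is running (State.Status "running", restarted at 10:04:32), the limits match the requested flags (Memory 3221225472 = 3072MB, NanoCpus 2000000000 = 2 CPUs), and all five service ports, including the apiserver's 8443, are published on 127.0.0.1, so Docker-level networking is intact. The same triage is possible without reading the full JSON; for example, a --format query such as

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-768931

prints just the container state and the host port bound to 8443 (33172 in this run).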
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (270.914834ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
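Exit status 2 from minikube status indicates a not-fully-healthy cluster rather than a harness error, which is why the helper treats it as potentially ok; with --format={{.Host}} the output is reduced to the host state (Running), hiding the per-component breakdown. A fuller view against this profile would be something like

	out/minikube-linux-amd64 status -p newest-cni-768931 --output json

which reports the host, kubelet, apiserver, and kubeconfig states separately; given the log above, the apiserver field is the one expected to show Stopped.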
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-768931 logs -n 25
E0804 10:08:46.191053 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:46.481261 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-561540 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                                │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p bridge-561540 sudo crio config                                                                                                                                                                                                                      │ bridge-561540     │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p bridge-561540                                                                                                                                                                                                                                       │ bridge-561540     │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat docker --no-pager                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo docker system info                                                                                                                                                                                                              │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                        │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                  │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cri-dockerd --version                                                                                                                                                                                                           │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat containerd --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo containerd config dump                                                                                                                                                                                                          │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat crio --no-pager                                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                         │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo crio config                                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p kubenet-561540                                                                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ stop    │ -p newest-cni-768931 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 10:04:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
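The daemon probe above serializes the entire `docker system info` struct through a Go template. A minimal shell sketch of the same probe, assuming `jq` is installed on the host (it is not part of the test itself):

	# Query the Docker daemon as JSON and pull out a few of the fields minikube inspects.
	docker system info --format '{{json .}}' \
	  | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal, Driver}'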
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
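The first dial at 10:04:33 is reset because sshd inside the freshly restarted container is not up yet; libmachine retries internally until the `hostname` probe at 10:04:36 succeeds. A rough shell equivalent of that wait, assuming the forwarded port 33169 and key path from this run (the loop itself is illustrative, not minikube's code):

	# Poll the forwarded SSH port until sshd inside the container answers.
	for i in $(seq 1 30); do
	  ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	      -i ~/.minikube/machines/newest-cni-768931/id_rsa \
	      -p 33169 docker@127.0.0.1 hostname && break
	  sleep 1
	done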
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
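The unit update above follows a write-new, diff, swap-and-restart pattern: the rendered unit goes to docker.service.new, and docker is only reloaded and restarted when `diff` reports a change. The same idiom in isolation (`render_unit` is a hypothetical stand-in for the long printf in the log):

	# Restart docker only if the rendered unit actually differs from the installed one.
	render_unit | sudo tee /lib/systemd/system/docker.service.new >/dev/null
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi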
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
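The two `find ... -exec` commands above patch the loopback CNI config in place (injecting a "name" key and pinning cniVersion to 1.0.0) and move any bridge or podman configs aside as *.mk_disabled so the kubelet cannot load them. A stripped-down sketch of the disable step, under the same paths:

	# Park conflicting CNI configs without deleting them; they can be restored later.
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -f "$f" ] && sudo mv "$f" "$f.mk_disabled"
	done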
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
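crictl resolves its runtime endpoint from /etc/crictl.yaml, so the file is first pointed at containerd and then, once Docker is selected as the runtime (10:04:38.507), rewritten to cri-dockerd. A sketch for verifying the handoff by hand:

	# Confirm crictl now talks to cri-dockerd rather than containerd.
	cat /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version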
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
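The network probe above flattens `docker network inspect` into one JSON object via a Go template. An equivalent, easier-to-read query, assuming `jq` on the host:

	# Same fields as the template above, extracted with plain inspect + jq.
	docker network inspect newest-cni-768931 \
	  | jq '.[0] | {Name, Driver, Subnet: .IPAM.Config[0].Subnet, Gateway: .IPAM.Config[0].Gateway}'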
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
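The host.minikube.internal rewrite above is idempotent: strip any existing entry with `grep -v`, append the fresh one, and copy the temp file back over /etc/hosts in a single sudo step. The same idiom spelled out:

	# Idempotently pin a hosts entry: drop the old line, append the new, replace the file.
	entry=$'192.168.76.1\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts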
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
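The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later handed to kubeadm. One way to exercise such a config without touching the node, using kubeadm's standard --dry-run flag (illustrative; the test itself does not run this):

	# Sanity-check the generated config without mutating the host.
	sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run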
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
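The "scp memory -->" entry above differs from the file copies before it: the payload (here the generated kubeconfig) exists only in memory and is streamed to the node over the established SSH connection. A minimal sketch of the idea, assuming a connected *ssh.Client from golang.org/x/crypto/ssh and substituting plain tee for minikube's own scp-style transfer:

    package sketch

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams an in-memory payload to a remote path, the idea
    // behind "scp memory --> /var/lib/minikube/kubeconfig". The real
    // ssh_runner speaks the scp protocol; tee is used here for brevity.
    func writeRemote(client *ssh.Client, payload []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload)
        return sess.Run("sudo tee " + dest + " >/dev/null")
    }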
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
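Each hash/symlink triple above follows the same recipe: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-name hash, so for every PEM the hash is computed with openssl x509 -hash -noout and <hash>.0 (3ec20f2e.0, b5213941.0, 51391683.0 above) is linked to the file. A sketch of the equivalent steps, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM and symlinks
    // /etc/ssl/certs/<hash>.0 to it, mirroring the test -L || ln -fs pair.
    func linkCACert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
    }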
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
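The six checks above use openssl's -checkend flag, which exits nonzero when the certificate expires within the given number of seconds (86400 = 24h), so soon-to-expire control-plane certs get regenerated before they break the cluster. The same test in Go, parsing NotAfter directly (a sketch; the path is one of those in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires inside d,
    // the equivalent of `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
    }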
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
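The bracketed StartCluster dump above is Go's %+v rendering of the profile config struct: field:value pairs separated by spaces, nested structs and slices in braces, empty strings left blank after the colon. A much-reduced, hypothetical stand-in showing where the format comes from (the real struct is far larger and lives in the minikube source tree):

    package main

    import "fmt"

    type Node struct {
        Name string
        IP   string
        Port int
    }

    type Config struct {
        Name  string
        Nodes []Node
    }

    func main() {
        cfg := Config{Name: "newest-cni-768931", Nodes: []Node{{IP: "192.168.76.2", Port: 8443}}}
        fmt.Printf("%+v\n", cfg)
        // Output: {Name:newest-cni-768931 Nodes:[{Name: IP:192.168.76.2 Port:8443}]}
    }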
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
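The "duration metric" lines above are per-phase timings: take a timestamp before the phase runs and log time.Since afterwards. A sketch of the pattern (the phase function is a stand-in named after the log entry):

    package main

    import (
        "log"
        "time"
    )

    func restartPrimaryControlPlane() { time.Sleep(20 * time.Millisecond) } // stand-in

    func main() {
        start := time.Now()
        restartPrimaryControlPlane()
        log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
    }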
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
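Addon enablement is resolved into the name-to-bool map shown above; each entry that is true then gets its own Setting/installing sequence in the lines that follow. A sketch of that dispatch, with names taken from the log (Go map iteration is unordered, consistent with the interleaved Setting lines below):

    package main

    import "fmt"

    func main() {
        toEnable := map[string]bool{
            "dashboard":            true,
            "default-storageclass": true,
            "storage-provisioner":  true,
            "metrics-server":       false,
        }
        for name, on := range toEnable {
            if on {
                fmt.Printf("Setting addon %s=true in profile %q\n", name, "newest-cni-768931")
            }
        }
    }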
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
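The three cli_runner lines above probe the container's state with docker inspect and a Go template, which prints only the status string. The same call from Go, shelling out exactly as logged:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // e.g. "running" or "exited"
    }

    func main() {
        fmt.Println(containerStatus("newest-cni-768931"))
    }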
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
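Each sshutil line above dials a fresh SSH client against 127.0.0.1:33169, the host port Docker forwards to the container's sshd, authenticating with the profile's id_rsa. A sketch with golang.org/x/crypto/ssh (host-key checking is skipped purely to keep the sketch small):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33169", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }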
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
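The healthz loop above issues one short-timeout GET per attempt against https://192.168.76.2:8443/healthz; "connection refused" here and the later "Client.Timeout exceeded" entries are the two failure modes seen while the apiserver is still coming up. A sketch of such a wait loop (certificate verification is disabled only to keep the sketch self-contained; minikube trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // one bounded probe per attempt
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", time.Now().Add(6*time.Minute)))
    }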
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
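The dashboard manifests are applied in a single invocation of the version-pinned kubectl, one -f flag per file, with KUBECONFIG pointing at the node's kubeconfig. A sketch of building that command (in the log it runs on the node via ssh_runner; exec.Command stands in here):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        files := []string{
            "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
            "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
            "dashboard-dp.yaml", "dashboard-role.yaml",
            "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
            "dashboard-secret.yaml", "dashboard-svc.yaml",
        }
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", filepath.Join("/etc/kubernetes/addons", f))
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }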
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
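The api_server.go lines above are a health poll against https://192.168.76.2:8443/healthz, repeated roughly every 500ms. The error sequence is telling: first a TLS handshake timeout, then "connection reset by peer", then "connection refused", the signature of an apiserver that hung and then exited, leaving nothing listening on 8443. A minimal sketch of such a probe loop follows; the insecure TLS client is an assumption for brevity (minikube's real checker trusts the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz polls url until it returns 200 or the deadline passes,
	// mirroring the api_server.go:253/269 pairs in the log above.
	func probeHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip cert checks; the real checker uses the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for start := time.Now(); time.Since(start) < deadline; {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			// handshake timeout / connection reset / refused all land here
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(probeHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute))
	}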
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
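
In parallel with the addon retries, the api_server.go lines poll the apiserver's /healthz roughly every 500ms (visible in the timestamps) and log "stopped" for each refused dial. A minimal sketch of that polling loop, assuming the endpoint from the log and an insecure transport for brevity rather than minikube's actual client setup:

// healthz_poll.go — a sketch of the healthz wait loop: poll until the
// apiserver answers 200 OK or an overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // cadence seen in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", 30*time.Second))
}
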
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
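
Unlike the "connection refused" dials above, the no-preload node check reaches the apiserver but fails with "net/http: TLS handshake timeout": the TCP connection opens, yet the handshake does not complete within the Go client's handshake deadline, which is typical of an apiserver that is accepting connections while still initializing. In net/http that deadline is Transport.TLSHandshakeTimeout (10s on the default transport); a minimal sketch, with the URL taken from the log:

// handshake_timeout.go — a sketch showing where the "net/http: TLS handshake
// timeout" error comes from: the client-side handshake deadline.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Handshakes slower than this fail with the exact error above;
			// 10s matches Go's DefaultTransport.
			TLSHandshakeTimeout: 10 * time.Second,
		},
	}
	resp, err := client.Get("https://192.168.94.2:8443/api/v1/nodes/no-preload-499486")
	if err != nil {
		fmt.Println(err) // e.g. "... net/http: TLS handshake timeout"
		return
	}
	resp.Body.Close()
	fmt.Println(resp.Status)
}
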
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
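
The "Process exited with status 1" reports, with their separate stdout:/stderr: sections, reflect how the runner executes kubectl on the node with KUBECONFIG injected and captures both streams plus the exit code. A hypothetical local stand-in (plain os/exec rather than minikube's SSH-based ssh_runner), with the path and flags taken from the log:

// run_kubectl.go — a sketch of the reporting pattern: inject KUBECONFIG,
// capture stdout/stderr separately, and recover the status via *exec.ExitError.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")

	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Matches the report format above: status, then both streams.
		fmt.Printf("Process exited with status %d\nstdout:\n%s\nstderr:\n%s\n",
			exitErr.ExitCode(), stdout.String(), stderr.String())
	} else if err != nil {
		fmt.Println("could not start kubectl:", err)
	}
}
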
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
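The "retry.go:31] will retry after ..." lines show minikube re-running each failed apply after a varying, growing delay. A hedged sketch of that retry-with-backoff pattern; the function name and the jitter policy below are assumptions for illustration, not minikube's actual retry package:

// Hedged sketch of the retry-with-backoff pattern visible in the
// "will retry after 12.6s / 14.2s" lines; names and jitter are assumed.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Randomized, growing delay, similar in spirit to the varying
		// retry intervals recorded in the log above.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	_ = retryWithBackoff(3, 2*time.Second, func() error {
		return errors.New("apply failed") // stand-in for the kubectl apply above
	})
}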
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
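The node_ready.go lines above poll the node's Ready condition every couple of seconds until the apiserver answers. A sketch of the same check using client-go (a tooling assumption; the test's own code is not shown here), with the node name and kubeconfig path taken from the log:

// Hedged sketch: poll a node's Ready condition, conceptually like the
// node_ready.go retry loop above. Uses client-go; not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(cs, "no-preload-499486")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2-2.5s between retries
	}
}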
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
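The block above is minikube's diagnostic pass once the healthz check keeps failing: for each control-plane component it lists matching containers with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`, then tails each container's logs. A hedged Go sketch of that loop, shelling out to docker exactly as the ssh_runner lines show; illustrative only, not minikube's logs.go:

// Hedged sketch of the log-gathering pass: discover container IDs by name
// filter, then tail each one's logs, mirroring the commands in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>, as in the gathering pass above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}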
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
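The repeating pattern above is minikube's diagnostic loop: api_server.go probes https://192.168.76.2:8443/healthz, records "stopped:" on connection refused, and falls back to collecting component logs. Below is a minimal sketch of that kind of probe; the 5-second timeout and the insecure TLS config are assumptions for illustration, not minikube's actual client setup.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver /healthz endpoint and
// reports whether it answered "ok". Timeout and TLS settings here are
// illustrative assumptions, not minikube's real configuration.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// A refused dial here is what the log records as "stopped:".
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}
```

Against the cluster in this run, checkHealthz("https://192.168.76.2:8443/healthz") would surface the same "dial tcp 192.168.76.2:8443: connect: connection refused" seen throughout the log.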
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
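Each "N containers:" line above comes from one docker ps query per control-plane component: with the Docker runtime, cri-dockerd names pod containers k8s_<component>_<pod>_..., so filtering on the k8s_ prefix finds a component's containers even after they have exited (which is why the dead apiserver still shows up). A sketch of the same query, run locally rather than through minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers reproduces the docker ps invocation in the log. It returns
// the IDs of all containers (running or exited) whose name starts with
// "k8s_<component>". Illustrative sketch; minikube issues this over SSH
// via ssh_runner.go.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	return ids, nil
}
```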
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
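The "container status" step uses a shell fallback chain: run crictl if `which` finds it, and if that invocation fails for any reason, fall back to `docker ps -a`. Reproduced as a sketch, under the assumption of direct shell access to the node:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the same fallback chain seen in the log lines above:
// prefer crictl when installed, otherwise list containers with docker.
// Illustrative only; minikube wraps this in its SSH runner.
func containerStatus() (string, error) {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("container status: %w", err)
	}
	return string(out), nil
}
```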
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
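The W-level lines from pid 2149628 are interleaved from a second test running in parallel (the no-preload-499486 cluster of TestStartStop, at 192.168.94.2), which is sitting in its own wait loop for the node's Ready condition. A client-go sketch of that kind of loop; the kubeconfig path and poll interval are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node reports
// condition Ready=True, in the spirit of the node_ready.go:55 lines above.
func waitNodeReady(kubeconfig, name string, interval time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Connection refused while the apiserver restarts: warn and
			// retry, exactly the behaviour in the W-level lines above.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			time.Sleep(interval)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(interval)
	}
}
```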
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
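The failure signature above (one TLS handshake timeout, then connection resets and refusals on localhost:8443) is what an apiserver crash looks like from the client side: the in-flight request dies as the process exits, and every retry is refused until it comes back. The same probe can be repeated by hand; a minimal sketch, assuming the binary path and kubeconfig shown in the log:

    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz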
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
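All three addon applies failed for the same underlying reason: kubectl fetches the OpenAPI schema from the apiserver to validate each manifest, and localhost:8443 was refusing connections, so the enable step finished with enabled=[]. A hedged sketch of a manual retry once the apiserver answers again, reusing the paths from the log; --validate=false skips the schema download that failed here:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml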
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
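The container-status command above is a shell fallback chain: if crictl is not on PATH, `which crictl` prints nothing, `echo crictl` keeps the command word non-empty, the resulting `sudo crictl ps -a` fails, and the outer || falls through to plain Docker. Expanded for readability (a sketch with the same behavior):

    CRICTL=$(which crictl || echo crictl)   # literal "crictl" when not installed
    sudo "$CRICTL" ps -a || sudo docker ps -a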
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
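"No such container" means the id 649f5e5c295c came from an earlier listing and the container had already been removed (or restarted under a new id) by the time its logs were requested; the listings that follow only ever find one kube-apiserver container, 806e7ebaaed1. One way to check whether an id is still known to the daemon, assuming Docker as the runtime (a sketch):

    docker ps -a --no-trunc --filter "id=649f5e5c295c" --format "{{.ID}} {{.Status}}"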
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
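Each "Checking apiserver healthz" / "stopped" pair above is one iteration of minikube's readiness poll against the node IP; a refused TCP connection is treated as a stopped apiserver and triggers another round of log gathering. The endpoint can also be probed directly; -k is needed because the apiserver presents a cluster-internal certificate (a sketch, using the address from the log):

    curl -sk https://192.168.76.2:8443/healthz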
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:49.883713 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
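	Note: the two lines above are the direct health probe driving this section: a GET against https://192.168.76.2:8443/healthz that fails with connection refused, matching the kubectl errors. A minimal Go sketch of such a probe, assuming a short timeout and skipped TLS verification (both assumptions for illustration, not minikube's actual client settings):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe target copied from the log; timeout and TLS settings are assumptions.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The cluster's serving cert is not trusted from here, so the
			// sketch skips verification the way an opaque liveness check might.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// While the apiserver is down this prints the same
		// "connect: connection refused" failure seen in the log.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}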
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
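	Note: the docker ps runs above discover one container set per control-plane component via the k8s_<component> name prefix; the matching IDs then feed the docker logs --tail 400 gathering steps that follow. A sketch of the same discovery-plus-collection flow in Go, assuming a docker CLI on PATH (component names and CLI flags are taken from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Subset of the components the log enumerates; the full pass also covers
	// kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard.
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		// Matches the "docker ps -a --filter=name=k8s_<component>" runs above.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(component, "error:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
		for _, id := range ids {
			// Mirrors the "docker logs --tail 400 <id>" gathering step.
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err == nil {
				fmt.Printf("collected %d bytes of logs from %s\n", len(logs), id)
			}
		}
	}
}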
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
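	Note: from here the trace is a steady retry loop. The healthz probe at 192.168.76.2:8443 is re-attempted roughly every three seconds (10:06:49.7, 10:06:52.7, 10:06:55.7, ...), and each refused attempt triggers the same container enumeration, log gathering, and failing describe nodes pass, interleaved with readiness warnings from a second test process (pid 2149628) polling no-preload-499486 at 192.168.94.2:8443.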
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
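	Note: the container status step relies on a shell fallback: "which crictl || echo crictl" expands to the crictl path when the binary is installed (otherwise to the bare name, which then fails to execute), and the trailing "|| sudo docker ps -a" lets docker supply the listing when crictl does not.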
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
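	Note: the interleaved W-lines from pid 2149628 come from the no-preload test polling the node's Ready condition and retrying on connection refused. A hedged client-go sketch of that kind of check (the kubeconfig path is a placeholder assumption; the node name and retry-on-error behavior mirror the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; the tests point at per-profile configs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "no-preload-499486", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is unreachable this is the recurring
			// "connection refused ... (will retry)" warning in the log.
			fmt.Println("will retry:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", cond.Status)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}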
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
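The "describe nodes" failure above comes from minikube's log collector, which shells into the node and runs the bundled kubectl against the node-local kubeconfig; "connection refused" on [::1]:8443 therefore means nothing was listening on the apiserver port at that instant, not that the kubeconfig is wrong. The same probe can be reproduced by hand while a cluster is up; this is an illustrative sketch (the <profile> placeholder stands for the profile under test, and the binary path is taken from the log lines above):

    # Run the collector's probe manually over minikube ssh (sketch).
    minikube -p <profile> ssh -- \
      sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig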
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
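Each gathering pass above runs the same fixed battery: journal pulls for kubelet and Docker, dmesg, per-container docker logs, and a "container status" step whose fallback chain is worth unpacking. An annotated copy of that command as it appears in the log: `which crictl` prints a path when crictl is installed (so that path is run); when it is not, the `echo crictl` branch substitutes the bare word, the resulting command fails, and the `||` falls through to docker.

    # Annotated copy of the container-status command used above: prefer
    # crictl when installed; otherwise the bare name fails and the
    # docker fallback after "||" runs instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a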
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
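The node_ready warnings interleaved here belong to a different test process (pid 2149628, bringing up no-preload-499486) writing to the same stream as the collector above (pid 2163332), which is why the timestamps appear to jump backwards between adjacent lines. That poll fetches the node object over HTTPS and retries on connection refused; an approximate host-side equivalent, assuming minikube's usual kubeconfig context naming for the profile (an assumption, since the log only shows the raw GET):

    # Wait for the node's Ready condition from the host (sketch; assumes
    # a "no-preload-499486" context exists in the local kubeconfig).
    kubectl --context no-preload-499486 wait node/no-preload-499486 \
      --for=condition=Ready --timeout=10m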
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
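The cycle visible here repeats every few seconds: api_server.go probes https://192.168.76.2:8443/healthz, and each "connection refused" sends logs.go through another full diagnostic pass before the next attempt. The reachability half of that probe can be approximated with curl, since /healthz is typically readable anonymously under the default system:public-info-viewer binding (an assumption about this cluster's RBAC; the address is taken from the log):

    # Poll the apiserver health endpoint until it answers (sketch).
    until curl -ksf https://192.168.76.2:8443/healthz >/dev/null; do
      echo "apiserver not reachable yet; retrying in 3s"
      sleep 3
    done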
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
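The kubelet and Docker histories in these passes are pulled straight from the systemd journal; the two-unit form merges docker and cri-docker entries in time order. Annotated copies of the commands used above:

    # -u selects a systemd unit (repeatable; multiple units are merged
    # chronologically), -n 400 keeps only the last 400 entries.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400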
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
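The duration of the describe-nodes attempt above is fully accounted for by its own stderr: the command was launched at 10:07:20.07 and completed at 10:07:41.81, the logged 21.74 s, which decomposes as two back-to-back 10 s client-side TLS handshake timeouts (errors at 10:07:30.119 and 10:07:40.119) plus roughly 1.7 s for the final attempt that ended in a connection reset and then a refusal. The handshake timeouts are the most informative signal in this stretch: the apiserver socket briefly accepted TCP connections but never completed TLS, consistent with an apiserver container starting and dying again between enumeration passes.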
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 62ad65a28324
	
	** /stderr **
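"No such container" marks a time-of-check/time-of-use gap. The container IDs were enumerated at 10:07:19-20 (where two controller-manager IDs, db8e2ca87b17 and 5321aae275b7, were already visible, a sign the pod had just restarted), but because describe-nodes then blocked for 21.7 s, docker logs did not run until 10:07:41, by which point the old etcd container had been removed; the same staleness hits the old apiserver (806e7ebaaed1) and controller-manager (5321aae275b7) IDs just below. A sketch that sidesteps this by re-resolving the ID immediately before reading logs (the name filter mirrors the commands in the log):

    # Re-resolve the container ID right before reading its logs (sketch),
    # since IDs captured in an earlier pass can go stale across restarts.
    id=$(docker ps -a --filter name=k8s_etcd --format '{{.ID}}' | head -n1)
    [ -n "$id" ] && docker logs --tail 400 "$id"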
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 806e7ebaaed1
	
	** /stderr **
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 5321aae275b7
	
	** /stderr **
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
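For reference, an annotated copy of the dmesg pull used throughout these passes (flag meanings per util-linux dmesg):

    # -H human-readable timestamps, -P no pager, -L=never no color
    # codes, --level keeps only warnings and worse; tail caps the
    # output at 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400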
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
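From this pass on the collector sees fresh IDs (kube-apiserver 546ccc0d47d3, etcd 1f24d4315f70), confirming the control plane restarted underneath it. When chasing this kind of churn by hand, filtering on the Kubernetes labels that cri-dockerd attaches to containers (an assumption about the runtime's labeling, consistent with the k8s_ name prefixes seen above) lists every generation of a component with its exit status:

    # List all generations of the apiserver container with their status
    # (sketch; assumes cri-dockerd's io.kubernetes.container.name label).
    docker ps -a \
      --filter label=io.kubernetes.container.name=kube-apiserver \
      --format '{{.ID}}  {{.Status}}  {{.Names}}'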
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
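Each retry cycle in this log opens with the healthz probe recorded by api_server.go and, when it fails, falls through to the log-gathering commands above. As a rough illustration only (not minikube's actual code), here is a minimal Go sketch of such a probe, assuming the endpoint URL taken from the log and skipping TLS verification because the apiserver certificate is signed by minikube's own local CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Short timeout: a dead endpoint should fail fast, as in the log.
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Skip verification: the cert is from minikube's local CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            // Matches the "stopped: ... connect: connection refused"
            // lines that follow every healthz check in this run.
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz returned:", resp.Status)
    }

Against the cluster in this run, this sketch would print the same "connection refused" error that follows every "Checking apiserver healthz" line below.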
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
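Before each gather pass, the runner discovers control-plane containers with the docker ps -a --filter=name=... --format={{.ID}} commands shown above. A hedged local equivalent of that discovery step, run via os/exec rather than minikube's ssh_runner (the name filters are copied verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Name filters exactly as they appear in the ssh_runner commands above.
        filters := []string{
            "k8s_kube-apiserver",
            "k8s_etcd",
            "k8s_coredns",
            "k8s_kube-scheduler",
            "k8s_kube-proxy",
            "k8s_kube-controller-manager",
            "k8s_kindnet",
            "k8s_kubernetes-dashboard",
        }
        for _, f := range filters {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+f, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", f, err)
                continue
            }
            ids := strings.Fields(string(out))
            // Mirrors the "logs.go:282] N containers: [...]" lines.
            fmt.Printf("%d containers for %q: %v\n", len(ids), f, ids)
        }
    }

In this run the coredns, kube-proxy, kindnet, and kubernetes-dashboard filters match nothing, which is why every cycle logs 'No container was found matching ...' for those four components.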
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
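The GUEST_START exit above is the terminal symptom: the apiserver healthz endpoint at https://192.168.76.2:8443/healthz never reported healthy within the 6m node wait. A minimal manual probe, reusing the profile name and IP from the log (minikube below stands for the binary under test, out/minikube-linux-amd64; assumes curl is present on the host and in the node image):

    # the apiserver serving cert is self-signed, so skip verification with -k
    curl -k https://192.168.76.2:8443/healthz
    # or probe from inside the node
    minikube ssh -p newest-cni-768931 -- curl -sk https://localhost:8443/healthz

The component dumps below narrow the failure down to etcd.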
	
	
	==> Docker <==
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Loaded network plugin cni"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 10:04:39 newest-cni-768931 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8291adcc91b97cb252a24d35036c5efbb0996a08027e74bce7b3e0a6bf9a48cf/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2bc437b51e69e3c519e0761ce89040cfdde58b82f6e145391cd6e0c2ab5e208e/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/662feb1b8623b8a2e29aa4611d37b1170731bd5f7a2dc897b5f52883c376bec1/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4c205ed51dffe9b5b86784e923411ac6c4cd45de2c5e2e4648ad44b601456c17/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:04:44 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:04:44.183658975Z" level=info msg="ignoring event" container=cf7f705039858fd1e9136035e31987c37daa6edfab66c046bf64e03096b58692 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:02 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:02.012772715Z" level=info msg="ignoring event" container=2d096260eba4cf41bd065888c7f500814d5de630a1b1fc361f3947127b35e4fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:05 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:05.203874486Z" level=info msg="ignoring event" container=059756d38779c9ce2222befd10f7581bfad8f269e0d6bfe172215d53cbd82572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:06 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:06.230520357Z" level=info msg="ignoring event" container=e3a6308944b3d968179e3c495ba3e3438fbf285b19cf9bbf07d2965692300547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:30 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:30.005635105Z" level=info msg="ignoring event" container=bf239ceabd3147fe0e012eb9801492d77876a7ddd93fc0159b21dd207d7c3afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:43 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:43.942876680Z" level=info msg="ignoring event" container=649f5e5c295c89600065ff6074421cadc3ed95db0690cfcfe15ce4a3ac4ac6db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:44 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:44.965198998Z" level=info msg="ignoring event" container=69f71bfef17b06cc8a5dc342463c94500db45e0e165608d96196bb1b17386196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:12 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:12.006979008Z" level=info msg="ignoring event" container=62ad65a28324db44aec25b62a7b821e13717955c2910052ef5c10903fccd8507 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:27 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:27.859049038Z" level=info msg="ignoring event" container=806e7ebaaed1d1e4b1ed1116680ed33d3a9dc5d38319656b66d38586e6c02dea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:38 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:38.884007403Z" level=info msg="ignoring event" container=5321aae275b78662386b9386b19106ba3fd44d1c6a82e71ef1952c2c46335d24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:35 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:35.009931999Z" level=info msg="ignoring event" container=1f24d4315f70231c2695d277a5b8b9d24336254281ca6e077105280d5e5f618f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:40 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:40.764373891Z" level=info msg="ignoring event" container=db8e2ca87b17366e2e40aa7f7717aab1abd1be0b804290d9c2836790e07bc239 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:40 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:40.807931491Z" level=info msg="ignoring event" container=546ccc0d47d3f88d8d23afa8e595ee1538bdb059d62110fe9c682afd3e017027 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1f24d4315f702       1e30c0b1e9b99       About a minute ago   Exited              etcd                      10                  8291adcc91b97       etcd-newest-cni-768931
	546ccc0d47d3f       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            9                   2bc437b51e69e       kube-apiserver-newest-cni-768931
	db8e2ca87b173       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   9                   4c205ed51dffe       kube-controller-manager-newest-cni-768931
	4d9bcb7668482       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            1                   662feb1b8623b       kube-scheduler-newest-cni-768931
	89bc4723825bb       21d34a2aeacf5       9 minutes ago        Exited              kube-scheduler            0                   6c135c15276d7       kube-scheduler-newest-cni-768931
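Reading the container-status table: etcd is on restart attempt 10, kube-apiserver and kube-controller-manager on attempt 9, all Exited; only the current kube-scheduler is still Running. To pull the last crash output straight from an exited container, a sketch reusing the etcd container ID from the table above:

    minikube ssh -p newest-cni-768931 -- docker logs --tail 20 1f24d4315f702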
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:46.733627   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:46.734137   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:46.736554   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:46.737065   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:46.738570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003976] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000006] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +3.807738] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000008] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.251962] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +7.935446] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000034] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000005] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[ +23.237968] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 e9 0e 42 0b 64 08 06
	[  +0.000446] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 d5 e2 93 f6 db 08 06
	[Aug 4 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da a7 c8 ad 52 b3 08 06
	[  +0.000606] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da d5 10 fe 4e 73 08 06
	
	
	==> etcd [1f24d4315f70] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
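This usage dump is the root cause of the cascade: etcd exits immediately because it is started with -proxy-refresh-interval, one of the v2-proxy flags that newer etcd releases (3.6, which Kubernetes v1.34 moves to, dropped the v2 proxy) no longer define, and any unknown flag makes etcd print its usage and exit, hence the crash loop. Two hedged checks on the node, assuming the standard kubeadm static pod manifest path and the etcd image ID from the container-status table:

    # does the generated manifest still pass the removed flag?
    minikube ssh -p newest-cni-768931 -- sudo grep -n proxy-refresh-interval /etc/kubernetes/manifests/etcd.yaml
    # which etcd build does that image ship? (assumes the image has no fixed entrypoint)
    minikube ssh -p newest-cni-768931 -- docker run --rm 1e30c0b1e9b99 etcd --version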
	
	
	
	==> kernel <==
	 10:08:46 up 1 day, 18:50,  0 users,  load average: 0.52, 1.28, 1.70
	Linux newest-cni-768931 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [546ccc0d47d3] <==
	W0804 10:07:20.778159       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:20.778231       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 10:07:20.779591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 10:07:20.786381       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 10:07:20.791581       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 10:07:20.791602       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 10:07:20.791822       1 instance.go:232] Using reconciler: lease
	W0804 10:07:20.792693       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 10:07:20.792776       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:21.779364       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:21.779376       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:21.793176       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:23.134474       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:23.471015       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:23.691883       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:25.771273       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:26.083118       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:26.444232       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:29.395985       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:30.150258       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:30.475480       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:35.591437       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:37.064195       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:07:37.473407       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 10:07:40.793465       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
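The apiserver is a downstream casualty: with etcd crash-looping, every dial to 127.0.0.1:2379 is refused until the storage-factory setup hits its deadline and the process exits; the controller-manager healthz timeout and the scheduler list/watch failures below follow from the same outage. A quick check that nothing is listening on the etcd client port (a sketch; assumes ss from iproute2 is in the node image):

    minikube ssh -p newest-cni-768931 -- sudo ss -ltn | grep 2379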
	
	
	==> kube-controller-manager [db8e2ca87b17] <==
	I0804 10:07:19.258524       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:07:19.728823       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:07:19.728848       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:07:19.730316       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:07:19.730331       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:07:19.730674       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:07:19.730778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:07:40.734487       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.76.2:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-scheduler [4d9bcb766848] <==
	E0804 10:07:34.726287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:07:36.139104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:07:42.350918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:07:48.282140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:07:49.832242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:07:50.582400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:07:53.494288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:07:53.616535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:07:54.098272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:07:55.535515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:08:02.280209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:08:02.884075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:08:04.005685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:08:08.149988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:08:15.011870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:08:17.251091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:08:21.696623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:08:22.519039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:08:24.418812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:08:27.814522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:08:31.976195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:08:32.712898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:08:33.723365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:08:44.369034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:08:46.240912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	
	
	==> kube-scheduler [89bc4723825b] <==
	E0804 10:03:40.497585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:03:41.644446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:46.793027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:03:47.129343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:03:47.498649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:60970->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:43076->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:60974->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:03:49.482728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:43066->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:03:49.518652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:03:52.175953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:03:52.381066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:04:06.761064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:04:06.975695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:04:08.623458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:04:16.963592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:04:22.569447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:04:23.629502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:04:24.298423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:04:25.174292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:04:25.897947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:04:28.497132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:04:29.219349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:04:31.307534       1 server.go:274] "handlers are not fully synchronized" err="context canceled"
	E0804 10:04:31.307656       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 10:08:30 newest-cni-768931 kubelet[1564]: E0804 10:08:30.884637    1564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-newest-cni-768931_kube-system(59d53768f66016db0d7a945479ffe178)\"" pod="kube-system/kube-apiserver-newest-cni-768931" podUID="59d53768f66016db0d7a945479ffe178"
	Aug 04 10:08:31 newest-cni-768931 kubelet[1564]: E0804 10:08:31.441201    1564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{newest-cni-768931.185888448aecec97  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:newest-cni-768931,UID:newest-cni-768931,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:newest-cni-768931,},FirstTimestamp:2025-08-04 10:04:42.830744727 +0000 UTC m=+0.062612062,LastTimestamp:2025-08-04 10:04:42.830744727 +0000 UTC m=+0.062612062,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:newest-cni-768931,}"
	Aug 04 10:08:31 newest-cni-768931 kubelet[1564]: E0804 10:08:31.952316    1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-768931?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:08:32 newest-cni-768931 kubelet[1564]: I0804 10:08:32.842768    1564 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-768931"
	Aug 04 10:08:32 newest-cni-768931 kubelet[1564]: E0804 10:08:32.843094    1564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="newest-cni-768931"
	Aug 04 10:08:32 newest-cni-768931 kubelet[1564]: E0804 10:08:32.910510    1564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"newest-cni-768931\" not found"
	Aug 04 10:08:32 newest-cni-768931 kubelet[1564]: E0804 10:08:32.981301    1564 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 10:08:36 newest-cni-768931 kubelet[1564]: E0804 10:08:36.884489    1564 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:08:36 newest-cni-768931 kubelet[1564]: I0804 10:08:36.884590    1564 scope.go:117] "RemoveContainer" containerID="1f24d4315f70231c2695d277a5b8b9d24336254281ca6e077105280d5e5f618f"
	Aug 04 10:08:36 newest-cni-768931 kubelet[1564]: E0804 10:08:36.884650    1564 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:08:36 newest-cni-768931 kubelet[1564]: I0804 10:08:36.884713    1564 scope.go:117] "RemoveContainer" containerID="db8e2ca87b17366e2e40aa7f7717aab1abd1be0b804290d9c2836790e07bc239"
	Aug 04 10:08:36 newest-cni-768931 kubelet[1564]: E0804 10:08:36.884765    1564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-newest-cni-768931_kube-system(0a578c02c1067bda6f15c5033e01f33e)\"" pod="kube-system/etcd-newest-cni-768931" podUID="0a578c02c1067bda6f15c5033e01f33e"
	Aug 04 10:08:36 newest-cni-768931 kubelet[1564]: E0804 10:08:36.884835    1564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-newest-cni-768931_kube-system(05d4f75e5879bee8e6895966620bd9b4)\"" pod="kube-system/kube-controller-manager-newest-cni-768931" podUID="05d4f75e5879bee8e6895966620bd9b4"
	Aug 04 10:08:38 newest-cni-768931 kubelet[1564]: E0804 10:08:38.953150    1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-768931?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:08:39 newest-cni-768931 kubelet[1564]: I0804 10:08:39.844225    1564 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-768931"
	Aug 04 10:08:39 newest-cni-768931 kubelet[1564]: E0804 10:08:39.844618    1564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="newest-cni-768931"
	Aug 04 10:08:40 newest-cni-768931 kubelet[1564]: E0804 10:08:40.889028    1564 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.76.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 10:08:41 newest-cni-768931 kubelet[1564]: E0804 10:08:41.442787    1564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{newest-cni-768931.185888448aecec97  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:newest-cni-768931,UID:newest-cni-768931,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:newest-cni-768931,},FirstTimestamp:2025-08-04 10:04:42.830744727 +0000 UTC m=+0.062612062,LastTimestamp:2025-08-04 10:04:42.830744727 +0000 UTC m=+0.062612062,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:newest-cni-768931,}"
	Aug 04 10:08:42 newest-cni-768931 kubelet[1564]: E0804 10:08:42.911174    1564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"newest-cni-768931\" not found"
	Aug 04 10:08:43 newest-cni-768931 kubelet[1564]: E0804 10:08:43.883827    1564 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:08:43 newest-cni-768931 kubelet[1564]: I0804 10:08:43.883923    1564 scope.go:117] "RemoveContainer" containerID="546ccc0d47d3f88d8d23afa8e595ee1538bdb059d62110fe9c682afd3e017027"
	Aug 04 10:08:43 newest-cni-768931 kubelet[1564]: E0804 10:08:43.884113    1564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-newest-cni-768931_kube-system(59d53768f66016db0d7a945479ffe178)\"" pod="kube-system/kube-apiserver-newest-cni-768931" podUID="59d53768f66016db0d7a945479ffe178"
	Aug 04 10:08:45 newest-cni-768931 kubelet[1564]: E0804 10:08:45.953798    1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-768931?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:08:46 newest-cni-768931 kubelet[1564]: I0804 10:08:46.846195    1564 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-768931"
	Aug 04 10:08:46 newest-cni-768931 kubelet[1564]: E0804 10:08:46.846630    1564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="newest-cni-768931"
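The kubelet log completes the picture: it back-off-restarts the static pods (2m40s for etcd, 1m20s for kube-apiserver and kube-controller-manager) and cannot register the node or post events while the apiserver is down. The live view uses the same journal the report samples, for example:

    minikube ssh -p newest-cni-768931 -- sudo journalctl -u kubelet -n 50 --no-pager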
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (277.752062ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-768931" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (254.98s)

TestStartStop/group/newest-cni/serial/Pause (26.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-768931 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (264.761486ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-768931 -n newest-cni-768931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (264.469458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-768931 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (302.718494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-768931 -n newest-cni-768931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (292.633724ms)

-- stdout --
	Running

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
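Given the SecondStart failure above, this outcome follows mechanically: the control-plane containers were already crash-looping, so the post-pause apiserver status reads "Stopped" rather than "Paused", and the post-unpause status never reaches "Running". Note that minikube pause acts on the containers inside the node, not on the node container itself, which is why the docker inspect below still shows the node running and unpaused. To extract just those fields instead of scanning the full JSON, a small Go-template sketch:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-768931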
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-768931
helpers_test.go:235: (dbg) docker inspect newest-cni-768931:

-- stdout --
	[
	    {
	        "Id": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	        "Created": "2025-08-04T09:54:35.028106074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2163578,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T10:04:32.896051547Z",
	            "FinishedAt": "2025-08-04T10:04:31.554642323Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hosts",
	        "LogPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd-json.log",
	        "Name": "/newest-cni-768931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-768931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-768931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	                "LowerDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-768931",
	                "Source": "/var/lib/docker/volumes/newest-cni-768931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-768931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-768931",
	                "name.minikube.sigs.k8s.io": "newest-cni-768931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d496a379f643afdf0008eeaa73490cdbbab104feff9921da81864e373d58ba90",
	            "SandboxKey": "/var/run/docker/netns/d496a379f643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-768931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:1d:38:75:59:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b469f2b8beae070883e49bfb67a442aa4bbac8703dfdd341c34c8d2ed3e42c07",
	                    "EndpointID": "349a5e6b8e6d705e3fe7a8f3cfcd94606e43e7038005d90f73899543e4f770f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-768931",
	                        "056ddd51825a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
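
Rather than reading the whole JSON dump above, individual fields can be pulled out with docker's built-in Go templates; a minimal sketch against the same container, using field paths that appear in the inspect output above:

	# Docker-level state: the container itself is running and not paused
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-768931
	# host port mapped to the apiserver's 8443/tcp (33172 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-768931
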
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931
E0804 10:08:53.892136 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:55.497389 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:58.322988 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.560407 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.566751 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.578070 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.599467 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.640895 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.722542 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:00.883790 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:01.205474 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:01.847571 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:03.129350 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:03.491637 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.087111 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.141516 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.147881 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.159202 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.180534 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.221880 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.303450 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.464966 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.691675 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:05.787142 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:06.429187 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (15.817882986s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-768931 logs -n 25
E0804 10:09:07.710738 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:10.272169 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:10.813344 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-768931 logs -n 25: (5.87750969s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-561540                                                                                                                                                                                                                                       │ bridge-561540     │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat docker --no-pager                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo docker system info                                                                                                                                                                                                              │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                        │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                  │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cri-dockerd --version                                                                                                                                                                                                           │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat containerd --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo containerd config dump                                                                                                                                                                                                          │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat crio --no-pager                                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                         │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo crio config                                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p kubenet-561540                                                                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ stop    │ -p newest-cni-768931 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ image   │ newest-cni-768931 image list --format=json                                                                                                                                                                                                             │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ pause   │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ unpause │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
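
The Audit table above and the sections that follow were all collected by the single post-mortem command recorded earlier in this test; a minimal sketch for gathering the same output by hand, assuming the profile from this run:

	out/minikube-linux-amd64 -p newest-cni-768931 logs -n 25
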
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 10:04:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
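
The preload verification above is only a file-existence check against the shared cache directory; a minimal sketch for inspecting that cache by hand, with the path taken from the log lines above:

	ls -lh /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/
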
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
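
All of the provisioning steps that follow run over this SSH session; a minimal sketch for opening the same session manually, with the IP, port, key path, and username taken from the sshutil line above:

	ssh -i /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa \
	    -p 33169 docker@127.0.0.1
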
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
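	(The one-liner above is an idempotent unit update: the new file is only swapped in, and docker only restarted, when it differs from what is already on disk. Here the empty diff output means no change was needed. Expanded for readability, with the same paths as in the log, the command is equivalent to this sketch:)
		# diff exits non-zero when the rendered unit differs from the installed one
		if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
		  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
		  sudo systemctl -f daemon-reload    # pick up the new unit file
		  sudo systemctl -f enable docker
		  sudo systemctl -f restart docker
		fi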
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
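	(The find/sed one-liner above patches any active loopback CNI config so it carries a "name" field and a pinned "cniVersion" of 1.0.0. A roughly equivalent, more readable form of the same operation, using the glob and paths from the log:)
		for f in /etc/cni/net.d/*loopback.conf*; do
		  case "$f" in *.mk_disabled) continue;; esac   # skip configs minikube disabled
		  grep -q loopback "$f" || continue
		  # ensure the config names the interface, as the CNI 1.x schema expects
		  grep -q name "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
		  # pin the schema version
		  sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
		done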
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
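	(Condensed, the sequence above follows the usual ordering for a socket-activated unit: unmask and enable the socket, reload systemd, restart the socket, then clear any start-limit state on the service before restarting it. The same commands, annotated:)
		sudo systemctl unmask cri-docker.socket
		sudo systemctl enable cri-docker.socket
		sudo systemctl daemon-reload
		sudo systemctl restart cri-docker.socket        # socket first, so activation works
		sudo systemctl reset-failed cri-docker.service  # clear StartLimitBurst counters
		sudo systemctl restart cri-docker.service
		sudo systemctl is-active --quiet cri-docker.service && echo running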
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
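	(The /etc/hosts rewrite above is the standard idempotent pattern: strip any stale host.minikube.internal line, append the current mapping, and copy the result back via a temp file, because a plain `sudo cmd > /etc/hosts` would open the redirection as the unprivileged user. Reformatted:)
		{
		  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop the old entry, if any
		  printf '192.168.76.1\thost.minikube.internal\n'   # append the current one
		} > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts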
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
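	(The rendered kubeadm config shown above is staged as kubeadm.yaml.new rather than written in place; whether the control plane actually needs reconfiguring is decided later by the `diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` run further down. If one wanted to sanity-check the staged file by hand, recent kubeadm releases ship a validator — a hypothetical invocation, not something this test run executes:)
		sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new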
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
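	(The <hash>.0 link names above follow OpenSSL's c_rehash convention: the file name is the certificate's subject hash, which is exactly what the preceding `openssl x509 -hash -noout` runs printed. The dance for one cert, condensed from the logged steps:)
		pem=/usr/share/ca-certificates/minikubeCA.pem
		sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
		hash=$(openssl x509 -hash -noout -in "$pem")          # b5213941 for this CA
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"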
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
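	(Each `-checkend 86400` above makes openssl exit non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, so the runs double as expiry guards for the restart path. For example:)
		# exit 0: still valid in 24h; exit 1: expires soon (or already expired)
		if ! openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
		  echo "certificate expires within 24h; needs regeneration" >&2
		fi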
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0804 10:04:43.886614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
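	(The "connection refused" here is expected while the apiserver container is still coming up; the healthz endpoint is simply polled until it answers. The equivalent manual check, with the node IP and port from the log, and -k because the endpoint serves the cluster's self-signed certificate:)
		until curl -sk --max-time 2 https://192.168.76.2:8443/healthz | grep -q ok; do
		  sleep 1   # keep polling until the apiserver reports healthy
		done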
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
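
Note: interleaved with the addon retries, the "node_ready.go:55]" warnings show the no-preload cluster polling its node's Ready condition and hitting the same dead apiserver at 192.168.94.2:8443. A minimal client-go sketch of such a Ready check (client-go usage is an assumption here; minikube's own helper differs in detail, and the kubeconfig path and node name are taken from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady fetches the node and reports whether its Ready condition is True.
	func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// With the apiserver down this is the "connect: connection refused" seen above.
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println(nodeIsReady(clientset, "no-preload-499486"))
	}
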
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:04.883286 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
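
Note: the "api_server.go:253/269" lines are minikube polling the apiserver's /healthz endpoint: GET on a short timeout, logging "stopped" on failure, and retrying until the endpoint answers. A minimal sketch of that poll loop (endpoint taken from the log; skipping TLS verification is an illustration-only shortcut):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the deadline passes.
	func waitForHealthz(url string, deadline time.Time) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", time.Now().Add(4*time.Minute)))
	}
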
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
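The failure mode above is the same for every manifest: before client-side validation, kubectl downloads the cluster's OpenAPI schema (GET /openapi/v2) from the apiserver, and with kube-apiserver not listening on localhost:8443 inside the node, every apply exits 1 with "connection refused" no matter what the YAML contains. The suggested --validate=false would only skip the schema download; the apply itself would still fail against the unreachable server. A minimal, self-contained Go sketch of the same reachability probe follows (not minikube code; the endpoint and 32s timeout are copied from the log, and certificate verification is skipped purely to keep the probe minimal):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Fetch the same document kubectl needs for client-side validation.
	client := &http.Client{
		Timeout: 32 * time.Second, // mirrors the ?timeout=32s in the log
		Transport: &http.Transport{
			// The apiserver serves a cluster-local cert; verification is
			// skipped here only to keep the reachability probe minimal.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// With kube-apiserver down this prints the same
		// "connect: connection refused" seen in every error above.
		fmt.Println("openapi fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi reachable:", resp.Status)
}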
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
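The retry.go:31 lines show minikube's retry helper re-running each failed apply after a randomized, growing delay, which is why the logged intervals (625ms, 785ms, 1.2s, 847ms, 2.4s, ...) climb but are not exact multiples. A minimal sketch of that retry-with-jittered-backoff pattern, assuming a doubling base interval with 50-150% jitter (an illustration of the pattern, not minikube's actual retry package; retryWithBackoff is a hypothetical helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff (hypothetical helper) re-runs fn until it succeeds or the
// attempts are exhausted, sleeping a jittered, roughly doubling interval
// between failures, which reproduces the irregular "will retry after ..."
// durations in the log above.
func retryWithBackoff(attempts int, fn func() error) error {
	base := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		d := time.Duration(float64(base) * (0.5 + rand.Float64())) // 50-150% of base
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, func() error {
		// Stand-in for the failing kubectl apply.
		return errors.New("connect: connection refused")
	})
	fmt.Println(err)
}

Jitter keeps the several concurrent retry loops here (storage-provisioner, storageclass, dashboard) from hammering the apiserver in lockstep once it comes back.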
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
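Each ssh_runner.go:195 "Run:" line executes the version-pinned kubectl shipped inside the node (/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl) over an SSH session, passing KUBECONFIG as a sudo environment assignment. A sketch of composing that same invocation with os/exec (run locally here as a stand-in for the SSH session; applyManifest is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest (hypothetical helper) builds the same command line seen in
// the ssh_runner entries: sudo, a KUBECONFIG environment assignment, the
// version-pinned kubectl from /var/lib/minikube/binaries, and one manifest.
// Running it locally stands in for minikube's SSH session into the node.
func applyManifest(version, manifest string) ([]byte, error) {
	kubectl := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", version)
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		kubectl, "apply", "--force", "-f", manifest)
	return cmd.CombinedOutput()
}

func main() {
	out, err := applyManifest("v1.34.0-beta.0", "/etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err) // "Process exited with status 1" above
	}
}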
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
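The interleaved api_server.go lines are a health probe: the timestamps (.368, .868, .368, ...) show https://192.168.76.2:8443/healthz being checked roughly every 500ms, each attempt failing immediately with connection refused (and, from 10:05:28 onward, timing out after ~5s instead, consistent with a 5-second client timeout). A minimal sketch of such a poll, assuming the 500ms interval read off the timestamps and skipping TLS verification for brevity (the real client verifies the cluster CA):

// Minimal healthz polling sketch; interval and endpoint are read off the
// log above, the TLS handling is a deliberate simplification.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the later "Client.Timeout exceeded" gaps
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connect: connection refused"
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}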
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
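The retry.go lines show the addon applier backing off between attempts: across this excerpt the delays grow from about 3.3s through 4.2s, 4.9s, 6.2s, 7.3s, 12.6s and 14.3s, which fits a growing backoff with random jitter. A minimal sketch of a loop with that shape, assuming roughly 1.5x growth per attempt and up-to-50% jitter (the exact policy in minikube's retry.go may differ):

// Minimal retry-with-jittered-backoff sketch; the delays in the log fit
// this shape, but the exact policy used by minikube's retry.go may differ.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2))) // up to +50% jitter
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay ~1.5x per attempt
	}
	return err
}

func main() {
	attempt := 0
	_ = retryWithBackoff(5, 3*time.Second, func() error {
		attempt++
		return fmt.Errorf("apply attempt %d: connection refused", attempt)
	})
}

Every attempt here fails the same way, so the loop simply runs until minikube's overall addon deadline expires and the "Enabling ... returned an error" warnings further down are emitted.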
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
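The node_ready.go warnings belong to the other process in this interleaved log (2149628, the no-preload cluster) polling its node's Ready condition against the same unreachable apiserver, first as TLS handshake timeouts and then as connection refused. A minimal client-go sketch of that check; the node name and kubeconfig path come from the log, everything else is assumed:

// Minimal node-Ready check sketch using client-go; node name and
// kubeconfig path are from the log, the rest is assumed.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "no-preload-499486", metav1.GetOptions{})
	if err != nil {
		// This is where the log's "error getting node ... (will retry)" comes
		// from: TLS handshake timeout or connection refused while the
		// apiserver is down.
		fmt.Println("error getting node (will retry):", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}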
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
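The "duration metric" line is minikube timing the whole enable-addons phase; with every apply failing, it closes out after almost two minutes with an empty enabled list. The measurement itself is the ordinary time.Since pattern, sketched here with assumed names:

// Minimal sketch of the duration-metric line above; names are assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{}             // every addon apply failed in this run
	time.Sleep(10 * time.Millisecond) // stand-in for the enable-addons work
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}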
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
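
The cycle just above (and repeated below at 10:05:48, 10:05:51, ...) is minikube's log-gathering pass: for each control-plane component it lists matching container IDs with a docker ps name filter, then tails the last 400 lines of each. A self-contained sketch of that loop, assuming only that docker is on PATH; gatherComponentLogs is an illustrative name, not minikube's API.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherComponentLogs mirrors the "docker ps -a --filter=name=k8s_...
    // --format={{.ID}}" calls above, then the "docker logs --tail 400 <id>"
    // calls for each ID found.
    func gatherComponentLogs(component string) error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        for _, id := range ids {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                // e.g. "No such container", as seen for 059756d38779 above,
                // when a container vanishes between the list and the tail.
                fmt.Printf("failed %s [%s]: %v\n", component, id, err)
                continue
            }
            fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            if err := gatherComponentLogs(c); err != nil {
                fmt.Println("gather failed:", err)
            }
        }
    }
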
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
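
Each gathering pass is bracketed by an api_server.go probe of https://192.168.76.2:8443/healthz, logged as "stopped" when the TCP connect is refused or the client times out (10:05:43, 10:05:47 above; 10:05:50, 10:05:53 below). A minimal sketch of such a probe follows; the short timeout and the InsecureSkipVerify setting are assumptions made to keep the example self-contained, not a description of minikube's TLS handling.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues a single GET against the apiserver healthz
    // endpoint and reports any transport error or non-200 status.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unhealthy: %s returned %s", url, resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.76.2:8443/healthz"))
    }
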
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
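
In parallel, process 2149628 (the no-preload test) is polling the node's "Ready" condition and hitting the same refused connection on 192.168.94.2:8443. A minimal client-go sketch of that kind of check; the kubeconfig path is a placeholder and nodeReady is an illustrative name.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node object and inspects its "Ready" condition;
    // a transport error here surfaces as the "error getting node ...
    // (will retry)" warnings seen in the log.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(cs, "no-preload-499486")
        fmt.Println(ready, err)
    }
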
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
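
The "--validate=false" hint in these errors appears because kubectl's client-side validation first downloads the OpenAPI schema from the apiserver; with the apiserver down, validation fails before anything is sent, and skipping it would only move the failure to the apply request itself, so minikube retries instead. A sketch of the apply step being retried, with the binary path, KUBECONFIG value, and manifest path taken from the log; applyManifests is an illustrative name.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests shells out the way the ssh_runner lines above show:
    // kubectl apply --force -f <file> [-f <file> ...], with KUBECONFIG set.
    func applyManifests(validate bool, files ...string) error {
        args := []string{"apply", "--force"}
        if !validate {
            args = append(args, "--validate=false")
        }
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := applyManifests(true, "/etc/kubernetes/addons/storageclass.yaml")
        fmt.Println(err)
    }
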
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c" /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
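The two lines above capture the loop this test is stuck in: minikube polls the apiserver's /healthz endpoint roughly every three seconds and, on a refused dial, falls back to gathering diagnostics before retrying. A minimal sketch of such a poll, assuming a hypothetical waitForHealthz helper and the endpoint shown in the log; this is not minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes. Names and intervals are illustrative, not minikube's.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster uses a self-signed CA, so skip verification in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between checks
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```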
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
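Each diagnostic pass begins by locating control-plane containers, one component at a time, via docker ps -a --filter=name=k8s_<component> --format={{.ID}}. A rough local equivalent of that discovery step (listContainers is an invented helper; the real harness runs the same command over SSH inside the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers whose name matches
// k8s_<component>, mirroring the docker ps invocations in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```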
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
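Once containers are discovered, each log source is dumped in turn with the commands visible above. A compressed sketch of that fan-out; the command strings are copied from the log lines, while the loop structure and names are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Log sources as they appear in this test: a name and the shell command
	// used to collect it. Container IDs are the ones discovered above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"kube-apiserver":   "docker logs --tail 400 806e7ebaaed1",
		"etcd":             "docker logs --tail 400 62ad65a28324",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
		}
		_ = out // in the real harness the output is attached to the report
	}
}
```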
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
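Interleaved with this test, the parallel no-preload-499486 start (pid 2149628) is in its own retry loop, re-checking the node's Ready condition against 192.168.94.2:8443 every two seconds. Reduced to a generic poll helper, the shape of that loop looks roughly like this (names and intervals are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// pollUntil retries check every interval until it succeeds or timeout expires,
// logging each failure the way node_ready.go does ("will retry").
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		fmt.Printf("error getting node condition (will retry): %v\n", err)
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(2*time.Second, 10*time.Second, func() error {
		// Stand-in for the GET against /api/v1/nodes/no-preload-499486.
		return fmt.Errorf("dial tcp 192.168.94.2:8443: connect: connection refused")
	})
	fmt.Println(err)
}
```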
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
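Every describe nodes attempt fails identically because the node's kubeconfig points at localhost:8443 and nothing is listening there while the kube-apiserver container is down. A quick TCP dial distinguishes this "connection refused" case from a network or auth problem; the check below is illustrative and not part of the test harness:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Dial the same address kubectl is trying to reach inside the node.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("port 8443 is accepting connections")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		// Nothing bound to the port: the apiserver itself is down.
		fmt.Println("connection refused: apiserver is not serving, not a network/auth issue")
	} else {
		fmt.Println("different failure:", err)
	}
}
```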
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:49.883713 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
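Every `describe nodes` attempt in this section fails the same way: kubectl inside the node dials localhost:8443 (per /var/lib/minikube/kubeconfig) and gets connection refused, even though an apiserver container (806e7ebaaed1) exists. A hand check of that state, hedged as an illustration (the `ss` and `curl` probes are not part of minikube's flow; curl stands in for the healthz client at api_server.go:253):

    # Run on the node: confirm nothing listens on 8443, then probe healthz
    # and re-run the exact describe-nodes command from the log.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -k https://192.168.76.2:8443/healthz   # expect: connection refused
    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig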
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
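Interleaved with the log collection above, a second process (pid 2149628, the no-preload test) is polling the Node object for "no-preload-499486" roughly every 2.5 seconds and hitting the same connection refused on its own apiserver at 192.168.94.2:8443. The GET it retries is equivalent to the following, assuming a kubeconfig with credentials for that cluster (illustrative only):

    # Equivalent readiness poll: fetch the Node's Ready condition directly.
    kubectl --server=https://192.168.94.2:8443 get node no-preload-499486 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'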
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
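The failure mode changes in this attempt: the describe-nodes call blocked for 21.7 seconds (ssh_runner.go:235 above) across two 10-second TLS handshake timeouts, then a connection reset, then connection refused. That pattern points at an apiserver that briefly accepted connections and died mid-handshake (a crash loop) rather than one that never started. A simple watch for that from the node, as an illustrative sketch:

    # Probe healthz once per second; curl prints 000 on refused/reset/timeout,
    # so a flapping apiserver shows up as an alternating run of codes.
    while true; do
      curl -sk -m 5 -o /dev/null -w '%{http_code}\n' https://192.168.76.2:8443/healthz
      sleep 1
    done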
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 62ad65a28324
	
	** /stderr **
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 806e7ebaaed1
	
	** /stderr **
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 5321aae275b7
	
	** /stderr **
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:47.883235 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:50.383116 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:52.383162 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:54.383410 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:56.383810 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:58.883290 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:00.883650 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:03.383190 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:05.383617 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
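
	The GUEST_START failure above means the harness polled https://192.168.76.2:8443/healthz for the full 6m0s node-wait window without ever getting a healthy response. The same probes it runs can be reproduced by hand while a cluster is in this state; a minimal sketch, assuming the newest-cni-768931 profile is still up, docker is the runtime inside the node, and <container-id> stands in for an ID returned by the first listing:

	# probe the apiserver health endpoint the harness polls (-k skips TLS verification)
	curl -k https://192.168.76.2:8443/healthz
	# list control-plane containers the same way logs.go does
	minikube -p newest-cni-768931 ssh -- docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
	# tail the newest apiserver container, mirroring the 'docker logs --tail 400' calls above
	minikube -p newest-cni-768931 ssh -- docker logs --tail 50 <container-id>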
	
	
	==> Docker <==
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Loaded network plugin cni"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 10:04:39 newest-cni-768931 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8291adcc91b97cb252a24d35036c5efbb0996a08027e74bce7b3e0a6bf9a48cf/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2bc437b51e69e3c519e0761ce89040cfdde58b82f6e145391cd6e0c2ab5e208e/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/662feb1b8623b8a2e29aa4611d37b1170731bd5f7a2dc897b5f52883c376bec1/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4c205ed51dffe9b5b86784e923411ac6c4cd45de2c5e2e4648ad44b601456c17/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:04:44 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:04:44.183658975Z" level=info msg="ignoring event" container=cf7f705039858fd1e9136035e31987c37daa6edfab66c046bf64e03096b58692 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:02 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:02.012772715Z" level=info msg="ignoring event" container=2d096260eba4cf41bd065888c7f500814d5de630a1b1fc361f3947127b35e4fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:05 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:05.203874486Z" level=info msg="ignoring event" container=059756d38779c9ce2222befd10f7581bfad8f269e0d6bfe172215d53cbd82572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:06 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:06.230520357Z" level=info msg="ignoring event" container=e3a6308944b3d968179e3c495ba3e3438fbf285b19cf9bbf07d2965692300547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:30 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:30.005635105Z" level=info msg="ignoring event" container=bf239ceabd3147fe0e012eb9801492d77876a7ddd93fc0159b21dd207d7c3afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:43 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:43.942876680Z" level=info msg="ignoring event" container=649f5e5c295c89600065ff6074421cadc3ed95db0690cfcfe15ce4a3ac4ac6db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:44 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:44.965198998Z" level=info msg="ignoring event" container=69f71bfef17b06cc8a5dc342463c94500db45e0e165608d96196bb1b17386196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:12 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:12.006979008Z" level=info msg="ignoring event" container=62ad65a28324db44aec25b62a7b821e13717955c2910052ef5c10903fccd8507 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:27 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:27.859049038Z" level=info msg="ignoring event" container=806e7ebaaed1d1e4b1ed1116680ed33d3a9dc5d38319656b66d38586e6c02dea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:38 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:38.884007403Z" level=info msg="ignoring event" container=5321aae275b78662386b9386b19106ba3fd44d1c6a82e71ef1952c2c46335d24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:35 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:35.009931999Z" level=info msg="ignoring event" container=1f24d4315f70231c2695d277a5b8b9d24336254281ca6e077105280d5e5f618f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:40 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:40.764373891Z" level=info msg="ignoring event" container=db8e2ca87b17366e2e40aa7f7717aab1abd1be0b804290d9c2836790e07bc239 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:40 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:40.807931491Z" level=info msg="ignoring event" container=546ccc0d47d3f88d8d23afa8e595ee1538bdb059d62110fe9c682afd3e017027 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:51 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:08:51.174324084Z" level=info msg="ignoring event" container=ba73e77719612f70c2bf982e456d9df249c6091fea00a99f39da19aa30b97400 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	390ff084d3a66       d85eea91cc41d       17 seconds ago       Running             kube-apiserver            10                  2bc437b51e69e       kube-apiserver-newest-cni-768931
	38bc7e4cff02c       9ad783615e1bc       17 seconds ago       Running             kube-controller-manager   10                  4c205ed51dffe       kube-controller-manager-newest-cni-768931
	ba73e77719612       1e30c0b1e9b99       17 seconds ago       Exited              etcd                      11                  8291adcc91b97       etcd-newest-cni-768931
	546ccc0d47d3f       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            9                   2bc437b51e69e       kube-apiserver-newest-cni-768931
	db8e2ca87b173       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   9                   4c205ed51dffe       kube-controller-manager-newest-cni-768931
	4d9bcb7668482       21d34a2aeacf5       4 minutes ago        Running             kube-scheduler            1                   662feb1b8623b       kube-scheduler-newest-cni-768931
	89bc4723825bb       21d34a2aeacf5       10 minutes ago       Exited              kube-scheduler            0                   6c135c15276d7       kube-scheduler-newest-cni-768931
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:09:12.498429   12784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:40172->[::1]:8443: read: connection reset by peer"
	E0804 10:09:12.500373   12784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:12.500912   12784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:12.502543   12784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:12.503035   12784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003976] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000006] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +3.807738] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000008] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.251962] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +7.935446] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000034] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000005] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[ +23.237968] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 e9 0e 42 0b 64 08 06
	[  +0.000446] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 d5 e2 93 f6 db 08 06
	[Aug 4 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da a7 c8 ad 52 b3 08 06
	[  +0.000606] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da d5 10 fe 4e 73 08 06
	
	
	==> etcd [ba73e7771961] <==
	command /bin/bash -c "docker logs --tail 25 ba73e7771961" failed with error: /bin/bash -c "docker logs --tail 25 ba73e7771961": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: ba73e7771961
	
	
	==> kernel <==
	 10:09:12 up 1 day, 18:50,  0 users,  load average: 0.57, 1.24, 1.68
	Linux newest-cni-768931 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [390ff084d3a6] <==
	W0804 10:08:51.476989       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:51.477011       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 10:08:51.478158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 10:08:51.486773       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 10:08:51.491358       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 10:08:51.491377       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 10:08:51.491602       1 instance.go:232] Using reconciler: lease
	W0804 10:08:51.492344       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:51.492357       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:52.478052       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:52.478057       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:52.492750       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:53.836793       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:54.120157       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:54.352717       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:56.454022       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:56.928715       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:57.299558       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:00.091869       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:00.403086       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:02.044821       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:07.699890       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:08.032299       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:08.925789       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 10:09:11.492491       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [546ccc0d47d3] <==
	command /bin/bash -c "docker logs --tail 25 546ccc0d47d3" failed with error: /bin/bash -c "docker logs --tail 25 546ccc0d47d3": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 546ccc0d47d3
	
	
	==> kube-controller-manager [38bc7e4cff02] <==
	I0804 10:08:51.598106       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:08:52.259759       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:08:52.259788       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:08:52.261580       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:08:52.261682       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:08:52.262305       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:08:52.262841       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:09:12.498903       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.76.2:8443/healthz\": dial tcp 192.168.76.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [db8e2ca87b17] <==
	I0804 10:07:19.258524       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:07:19.728823       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:07:19.728848       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:07:19.730316       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:07:19.730331       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:07:19.730674       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:07:19.730778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:07:40.734487       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.76.2:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-scheduler [4d9bcb766848] <==
	E0804 10:08:02.884075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:08:04.005685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:08:08.149988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:08:15.011870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:08:17.251091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:08:21.696623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:08:22.519039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:08:24.418812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:08:27.814522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:08:31.976195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:08:32.712898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:08:33.723365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:08:44.369034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:08:46.240912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:08:48.139294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:09:01.624578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:09:03.277040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:09:03.738631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:09:07.548697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:09:08.025746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:09:09.346339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:09:12.498492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47086->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:09:12.498492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:45256->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:09:12.498514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47080->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:09:12.498525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47070->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	
	
	==> kube-scheduler [89bc4723825b] <==
	E0804 10:03:40.497585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:03:41.644446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:46.793027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:03:47.129343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:03:47.498649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:60970->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:43076->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:60974->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:03:49.482728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:43066->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:03:49.518652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:03:52.175953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:03:52.381066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:04:06.761064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:04:06.975695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:04:08.623458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:04:16.963592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:04:22.569447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:04:23.629502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:04:24.298423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:04:25.174292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:04:25.897947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:04:28.497132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:04:29.219349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:04:31.307534       1 server.go:274] "handlers are not fully synchronized" err="context canceled"
	E0804 10:04:31.307656       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 10:09:01 newest-cni-768931 kubelet[12406]: E0804 10:09:01.838623   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 10:09:01 newest-cni-768931 kubelet[12406]: E0804 10:09:01.883184   12406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-768931?timeout=10s\": context deadline exceeded" interval="1.6s"
	Aug 04 10:09:02 newest-cni-768931 kubelet[12406]: I0804 10:09:02.086617   12406 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-768931"
	Aug 04 10:09:02 newest-cni-768931 kubelet[12406]: E0804 10:09:02.943601   12406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{newest-cni-768931.1858887e33be7b55  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:newest-cni-768931,UID:newest-cni-768931,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:newest-cni-768931,},FirstTimestamp:2025-08-04 10:08:50.476186453 +0000 UTC m=+0.059980287,LastTimestamp:2025-08-04 10:08:50.476186453 +0000 UTC m=+0.059980287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:newest-cni-768931,}"
	Aug 04 10:09:09 newest-cni-768931 kubelet[12406]: E0804 10:09:09.993547   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: E0804 10:09:10.497892   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: I0804 10:09:10.497967   12406 scope.go:117] "RemoveContainer" containerID="ba73e77719612f70c2bf982e456d9df249c6091fea00a99f39da19aa30b97400"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: E0804 10:09:10.581927   12406 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"newest-cni-768931\" not found"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: I0804 10:09:10.706605   12406 scope.go:117] "RemoveContainer" containerID="ba73e77719612f70c2bf982e456d9df249c6091fea00a99f39da19aa30b97400"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: E0804 10:09:10.707446   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: I0804 10:09:10.707530   12406 scope.go:117] "RemoveContainer" containerID="6c8f8998a2b067db2d2efe340572f57487ad60b7119d3b66cb8ad53ecef9b764"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: E0804 10:09:10.707691   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd pod=etcd-newest-cni-768931_kube-system(0a578c02c1067bda6f15c5033e01f33e)\"" pod="kube-system/etcd-newest-cni-768931" podUID="0a578c02c1067bda6f15c5033e01f33e"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.497195   12406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": read tcp 192.168.76.2:45230->192.168.76.2:8443: read: connection reset by peer" node="newest-cni-768931"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: I0804 10:09:11.718129   12406 scope.go:117] "RemoveContainer" containerID="546ccc0d47d3f88d8d23afa8e595ee1538bdb059d62110fe9c682afd3e017027"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.719063   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: I0804 10:09:11.719156   12406 scope.go:117] "RemoveContainer" containerID="390ff084d3a669e6950f243be8c00786d4d8c14b1f1c1caf7df9599b865d1a38"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.719337   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-newest-cni-768931_kube-system(59d53768f66016db0d7a945479ffe178)\"" pod="kube-system/kube-apiserver-newest-cni-768931" podUID="59d53768f66016db0d7a945479ffe178"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.723840   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: I0804 10:09:11.723909   12406 scope.go:117] "RemoveContainer" containerID="6c8f8998a2b067db2d2efe340572f57487ad60b7119d3b66cb8ad53ecef9b764"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.724036   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd pod=etcd-newest-cni-768931_kube-system(0a578c02c1067bda6f15c5033e01f33e)\"" pod="kube-system/etcd-newest-cni-768931" podUID="0a578c02c1067bda6f15c5033e01f33e"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497788   12406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-768931?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47032->192.168.76.2:8443: read: connection reset by peer" interval="3.2s"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497846   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47066->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497856   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47052->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497871   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dnewest-cni-768931&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47050->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497953   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.76.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47042->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
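The captured logs above show a single failure chain: etcd on 127.0.0.1:2379 refuses connections, kube-apiserver dies with "Error creating leases: error creating storage factory: context deadline exceeded", the controller-manager and scheduler then cannot reach https://192.168.76.2:8443, and the kubelet places both etcd and kube-apiserver into CrashLoopBackOff. A minimal triage sketch, assuming the kicbase node container still runs a local dockerd and reusing the profile name from this report (the name=etcd filter is an assumption about the kubelet's container naming):

    # List etcd containers inside the minikube node, then pull logs for one of them.
    $ out/minikube-linux-amd64 -p newest-cni-768931 ssh -- docker ps -a --filter name=etcd --format '{{.ID}} {{.Status}}'
    $ out/minikube-linux-amd64 -p newest-cni-768931 ssh -- docker logs --tail 25 <etcd-container-id>

Pulling the etcd logs directly avoids the "No such container" race seen above, where the harness asked for a container ID the kubelet had already removed.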
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (272.758939ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-768931" apiserver is not running, skipping kubectl commands (state="Stopped")
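As the "(may be ok)" note indicates, exit status 2 from `minikube status` reports a degraded component rather than a failed invocation, which is why the harness continues into the post-mortem. All component states can be read in one call by extending the Go-template flag already exercised above; a sketch (.Host and .APIServer appear in this report, .Kubelet and .Kubeconfig are assumed from minikube's default status fields):

    $ out/minikube-linux-amd64 status -p newest-cni-768931 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
    # For this run the report shows Host=Running but APIServer=Stopped.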
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-768931
helpers_test.go:235: (dbg) docker inspect newest-cni-768931:

-- stdout --
	[
	    {
	        "Id": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	        "Created": "2025-08-04T09:54:35.028106074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2163578,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T10:04:32.896051547Z",
	            "FinishedAt": "2025-08-04T10:04:31.554642323Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/hosts",
	        "LogPath": "/var/lib/docker/containers/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd/056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd-json.log",
	        "Name": "/newest-cni-768931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-768931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-768931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "056ddd51825ae3d9ed23f3636508ce53c5712e53181a5e3c8408b41ebd93d6bd",
	                "LowerDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e5386285c07262774b67064991fbd57df2fa46e1527bbd5b2453601a759c2a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-768931",
	                "Source": "/var/lib/docker/volumes/newest-cni-768931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-768931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-768931",
	                "name.minikube.sigs.k8s.io": "newest-cni-768931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d496a379f643afdf0008eeaa73490cdbbab104feff9921da81864e373d58ba90",
	            "SandboxKey": "/var/run/docker/netns/d496a379f643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-768931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:1d:38:75:59:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b469f2b8beae070883e49bfb67a442aa4bbac8703dfdd341c34c8d2ed3e42c07",
	                    "EndpointID": "349a5e6b8e6d705e3fe7a8f3cfcd94606e43e7038005d90f73899543e4f770f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-768931",
	                        "056ddd51825a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
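Rather than dumping the full JSON, the fields this post-mortem keys on can be extracted directly with `docker inspect` Go templates; a sketch against the container above:

    # Container state and restart count (here: running, restarts=0).
    $ docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' newest-cni-768931
    # Host port mapped to the API server port 8443/tcp (here: 33172).
    $ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-768931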
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (267.290545ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-768931 logs -n 25
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-561540                                                                                                                                                                                                                                       │ bridge-561540     │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat docker --no-pager                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo docker system info                                                                                                                                                                                                              │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                        │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                  │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cri-dockerd --version                                                                                                                                                                                                           │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat containerd --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo containerd config dump                                                                                                                                                                                                          │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat crio --no-pager                                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                         │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo crio config                                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p kubenet-561540                                                                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ stop    │ -p newest-cni-768931 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ image   │ newest-cni-768931 image list --format=json                                                                                                                                                                                                             │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ pause   │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ unpause │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
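Note the 10:04 UTC `start` row is the only one with an empty END TIME: that restart never completed, and it is the run traced in the "Last Start" log below. A reproduction sketch with the flags copied from that audit row (-v=8 is an added verbosity flag, not part of the original invocation):

    $ out/minikube-linux-amd64 start -p newest-cni-768931 --memory=3072 --alsologtostderr -v=8 \
        --wait=apiserver,system_pods,default_sa --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0-beta.0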
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 10:04:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
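For context on the info.go dump above: it is produced by shelling out to `docker system info --format "{{json .}}"` and decoding the JSON. A minimal sketch of that round trip, assuming a hypothetical DockerInfo struct that maps only a few of the fields visible in this log (not minikube's actual info.go):

// Sketch: run `docker system info --format "{{json .}}"` and decode a
// handful of the fields the log above inspects. The DockerInfo struct
// is hypothetical and deliberately incomplete.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type DockerInfo struct {
	Containers      int    `json:"Containers"`
	ServerVersion   string `json:"ServerVersion"`
	CgroupDriver    string `json:"CgroupDriver"`
	OperatingSystem string `json:"OperatingSystem"`
	MemTotal        int64  `json:"MemTotal"`
	NCPU            int    `json:"NCPU"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info DockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("server %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
}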
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
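The dial at 10:04:33 above is reset while the freshly restarted container's sshd is still coming up, and the same command succeeds at 10:04:36, so the provisioner evidently retries the connection. A minimal sketch of such a wait-for-SSH loop, under the assumption that "ready" just means the port accepts TCP connections (hypothetical helper, not libmachine's implementation):

// Sketch: poll a TCP port with a timeout until it accepts a
// connection, backing off briefly between attempts.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh on %s not reachable: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // back off and retry
	}
}

func main() {
	// 127.0.0.1:33169 is the forwarded SSH port seen in the log above.
	if err := waitForSSH("127.0.0.1:33169", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}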
	
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
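The one-liner above is an idempotent update: the rendered docker.service.new only replaces the installed unit, and Docker is only reloaded, re-enabled, and restarted, when `diff -u` reports a difference. A local-host sketch of the same pattern (assumes root, abbreviated error handling; the real flow runs these steps over SSH):

// Sketch: install the candidate unit and restart the service only if
// its contents differ from the currently installed unit.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func updateUnit(current, candidate string) error {
	old, _ := os.ReadFile(current) // a missing unit reads as empty
	next, err := os.ReadFile(candidate)
	if err != nil {
		return err
	}
	if bytes.Equal(old, next) {
		return os.Remove(candidate) // nothing changed, drop the .new file
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
}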
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
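The pair of commands above makes the host.minikube.internal mapping idempotent: first probe /etc/hosts for the exact entry, and only if it is missing rewrite the file with any stale mapping for that name filtered out and the fresh one appended. The same logic as a small sketch (hypothetical helper operating on a local file rather than over SSH):

// Sketch: ensure /etc/hosts contains exactly one "ip<TAB>host" entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	entry := ip + "\t" + host
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == entry {
			return nil // the exact mapping is already present
		}
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line) // drop any stale mapping for this host
		}
	}
	kept = append(kept, entry)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}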
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
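The "Images are preloaded" conclusion above boils down to listing repo:tag pairs with the same `docker images --format` invocation and verifying that the expected set is present. A sketch of that check (assumed shape, not minikube's actual cache_images.go; the `want` list below is shortened from the stdout block above):

// Sketch: list local images as repo:tag and report any expected image
// that is missing, which would trigger extraction from the preload tarball.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func preloadedImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	return have, nil
}

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.34.0-beta.0",
		"registry.k8s.io/kube-proxy:v1.34.0-beta.0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	have, err := preloadedImages()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img)
		}
	}
}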
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
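
Editor's note: the repeated three-step pattern above (hash the cert, then symlink it as `<hash>.0` under /etc/ssl/certs) is the standard OpenSSL CA-lookup convention: `openssl x509 -hash -noout` prints the subject-name hash that OpenSSL's directory lookup expects as the filename. A minimal sketch of the same sequence in Go, shelling out to `openssl` the way ssh_runner does; the function name and paths here are illustrative, not minikube's own code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM certificate
// and symlinks it as <hash>.0 so OpenSSL's CA directory lookup finds it.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then relink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
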
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
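
Editor's note: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours); the run of checks above is how minikube decides whether the existing control-plane certs can be reused. The same check in pure Go with crypto/x509 might look like this (a sketch, not minikube's actual implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window -- the crypto/x509 equivalent of
// `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
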
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
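
Editor's note: the line above is minikube's full ClusterConfig dump for the profile. As a reading aid only, here is a heavily trimmed sketch of the shape the dump implies; field names are copied verbatim from it, but the real struct in minikube's config package carries many more fields:

```go
// Illustrative subset only -- not minikube's actual type definitions.
type KubernetesConfig struct {
	KubernetesVersion string // "v1.34.0-beta.0"
	ClusterName       string // "newest-cni-768931"
	ContainerRuntime  string // "docker"
	NetworkPlugin     string // "cni"
	ServiceCIDR       string // "10.96.0.0/12"
}

type Node struct {
	IP           string // "192.168.76.2"
	Port         int    // 8443
	ControlPlane bool
	Worker       bool
}

type ClusterConfig struct {
	Name             string // profile name
	Driver           string // "docker"
	Memory           int    // MiB
	CPUs             int
	KubernetesConfig KubernetesConfig
	Nodes            []Node
	Addons           map[string]bool // e.g. dashboard:true
}
```
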
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
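
Editor's note: the kubeconfig lines above show the repair path: the profile's cluster and context stanzas are missing from the shared kubeconfig, so minikube rewrites the file under a file lock. A sketch of the same check-and-repair idea using client-go's clientcmd package; the server URL, the KUBECONFIG lookup, and the empty AuthInfo are assumptions for illustration, not minikube's own logic:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// ensureContext adds cluster/context/user stanzas for name if the
// kubeconfig at path is missing them, then writes the file back.
func ensureContext(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Contexts[name]; ok {
		return nil // nothing to repair
	}
	cfg.Clusters[name] = &api.Cluster{Server: server}
	cfg.AuthInfos[name] = &api.AuthInfo{} // credentials elided in this sketch
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := ensureContext(os.Getenv("KUBECONFIG"), "newest-cni-768931", "https://192.168.76.2:8443"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
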
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0804 10:04:43.886614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
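
Editor's note: each sshutil line above constructs an SSH client against the container's forwarded port 33169, authenticating as user "docker" with the machine's id_rsa key. A minimal sketch of that setup with golang.org/x/crypto/ssh; host-key checking is disabled here purely for brevity, and the key path in main is assumed, not taken from the test harness:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dial opens an SSH client the way the sshutil lines above describe:
// key-file auth against a forwarded localhost port.
func dial(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dial("127.0.0.1:33169", "docker", os.ExpandEnv("$HOME/.minikube/machines/newest-cni-768931/id_rsa"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()
	fmt.Println("connected")
}
```
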
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
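
Editor's note: each `retry.go:31] will retry after ...` line is minikube's generic retry helper re-running the failed `kubectl apply` after a growing delay until the apiserver comes back. The apply itself fails only at client-side validation, because validation needs the OpenAPI schema from the (still-down) apiserver. The retry pattern, in a self-contained sketch whose delays and attempt cap are illustrative rather than minikube's actual tuning:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, doubling
// delay between failures -- the shape behind the "will retry after"
// log lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // stand-in for a failed kubectl apply
		}
		return nil
	})
	fmt.Println("result:", err)
}
```
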
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
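
Editor's note: the healthz loop above polls https://192.168.76.2:8443/healthz until the apiserver answers, treating refused connections and timeouts as "not ready yet". A sketch of that probe; skipping TLS verification reflects that a bare healthz check cannot verify the apiserver's self-signed bootstrap cert, and the timeout values are illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 OK or the deadline passes.
// Connection-refused and timeout errors just mean "try again".
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed bootstrap cert
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not ready yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("not ready yet: status", resp.StatusCode)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
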
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:04.883286 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
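	In parallel with the apply retries, the api_server.go lines poll the apiserver's /healthz endpoint roughly every 500ms and log "stopped" for as long as the connection is refused. A rough sketch of that polling loop, assuming certificate verification is skipped purely for brevity (a real client would trust the cluster CA instead), with the endpoint taken from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves a self-signed cert; skipping verification is
		// acceptable in this sketch only.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // matches the log lines above
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}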
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
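	The second process (2149628) is meanwhile polling its own node's Ready condition against 192.168.94.2:8443 and hitting TLS handshake timeouts, which indicates the apiserver process is at least accepting TCP connections but not completing handshakes in time. Checking that condition with client-go looks roughly like the sketch below; the node name and kubeconfig path are taken from the log, and error handling is reduced to the retry the node_ready.go lines describe:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the KUBECONFIG value in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-499486", metav1.GetOptions{})
			if err != nil {
				// e.g. "net/http: TLS handshake timeout" while the apiserver is unhealthy
				fmt.Println("will retry:", err)
				time.Sleep(time.Second)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(time.Second)
		}
	}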
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
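The pair of lines above is minikube's apiserver health probe: api_server.go polls the /healthz endpoint and marks the probe "stopped" when the TCP connection is refused, which is what triggers each of the diagnostic sweeps that follow. A hand-run equivalent of that probe, with the endpoint address taken from the log (-k skips TLS verification, since the only question is whether anything is listening):

    curl -k --max-time 5 https://192.168.76.2:8443/healthz
    # "ok" means the apiserver is serving; "connection refused" matches the failures logged here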
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
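Each sweep begins by enumerating control-plane containers one component at a time, using the k8s_-prefixed name filters shown in the docker ps invocations above. The same enumeration can be reproduced by hand inside the minikube node:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done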
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
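The "container status" command above is a fallback chain: it prefers crictl when installed, degrades to the bare name crictl (which fails if the binary is absent), and finally falls back to docker ps -a. A behaviourally equivalent rewrite, expanded for readability:

    CRICTL=$(which crictl || echo crictl)   # resolved path, or the bare name when crictl is missing
    sudo "$CRICTL" ps -a || sudo docker ps -a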
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
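The failure mode in this sweep differs from the earlier connection-refused errors: the first attempt times out during the TLS handshake, a later read is reset by the peer, and the container listing above now shows a second kube-apiserver container (806e7ebaaed1). Together these suggest an apiserver that has begun listening but is not yet serving, or is crash-looping. One way to watch that transition directly, reusing the commands minikube itself runs:

    ID=$(docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}' | head -n1)
    docker logs --tail 400 "$ID"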
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c" /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 649f5e5c295c
	
	** /stderr **
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
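	Each retry cycle first enumerates the control-plane containers by Docker name filter before pulling their logs; only the apiserver, etcd, two schedulers, and the controller-manager exist, while coredns, kube-proxy, kindnet, and the dashboard were never created. The same lookup can be run by hand (a sketch using the filter syntax from the log; the ID shown is the apiserver container reported above):

	docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'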
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
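	The `docker logs --tail 400` gathers above only capture recent container output; when a container keeps failing, its recorded exit state is usually more telling. A hedged sketch, reusing the apiserver container ID from the log (546ccc0d47d3):

	docker inspect 546ccc0d47d3 --format 'status={{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}}'
	docker logs --tail 20 546ccc0d47d3 2>&1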
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
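	The `container status` step above wraps a fallback one-liner: try crictl, and if that fails, list containers with the Docker CLI instead. Expanded for readability (an illustrative equivalent, not the literal command the test runs):

	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a        # prefer the CRI-aware tool when present
	else
	  sudo docker ps -a        # otherwise list containers via Docker
	fi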
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
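	Note the change of failure mode at 10:08:12: `net/http: TLS handshake timeout` rather than `connection refused` means the TCP connection to 192.168.94.2:8443 was accepted but the server never completed the TLS handshake, which typically points at an apiserver that is starting up or overloaded rather than absent. A hedged way to time the two phases separately with curl:

	curl -sk --connect-timeout 5 -o /dev/null -w 'tcp=%{time_connect}s tls=%{time_appconnect}s\n' https://192.168.94.2:8443/healthz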
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
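	The dmesg gather above limits kernel output to warning level and higher; on a systemd node the same messages can also be read from the journal (a sketch, assuming systemd-journald is running on the node):

	sudo journalctl -k -p warning -n 400 --no-pager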
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
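The advice box above maps to one concrete command. A minimal sketch for collecting the full log bundle, assuming the newest-cni-768931 profile that the surrounding lines reference and the tree-local binary this suite invokes:

    # Sketch: gather the complete minikube log bundle for the failing profile,
    # as the advice box suggests; the profile name is inferred from nearby lines.
    out/minikube-linux-amd64 logs --file=logs.txt -p newest-cni-768931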
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:47.883235 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:50.383116 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:52.383162 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:54.383410 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:56.383810 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:58.883290 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:00.883650 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:03.383190 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:05.383617 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:07.384051 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:09.883346 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	
	
	==> Docker <==
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Aug 04 10:04:39 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:39Z" level=info msg="Start cri-dockerd grpc backend"
	Aug 04 10:04:39 newest-cni-768931 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8291adcc91b97cb252a24d35036c5efbb0996a08027e74bce7b3e0a6bf9a48cf/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2bc437b51e69e3c519e0761ce89040cfdde58b82f6e145391cd6e0c2ab5e208e/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/662feb1b8623b8a2e29aa4611d37b1170731bd5f7a2dc897b5f52883c376bec1/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:04:43 newest-cni-768931 cri-dockerd[1369]: time="2025-08-04T10:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4c205ed51dffe9b5b86784e923411ac6c4cd45de2c5e2e4648ad44b601456c17/resolv.conf as [nameserver 192.168.76.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:04:44 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:04:44.183658975Z" level=info msg="ignoring event" container=cf7f705039858fd1e9136035e31987c37daa6edfab66c046bf64e03096b58692 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:02 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:02.012772715Z" level=info msg="ignoring event" container=2d096260eba4cf41bd065888c7f500814d5de630a1b1fc361f3947127b35e4fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:05 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:05.203874486Z" level=info msg="ignoring event" container=059756d38779c9ce2222befd10f7581bfad8f269e0d6bfe172215d53cbd82572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:06 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:06.230520357Z" level=info msg="ignoring event" container=e3a6308944b3d968179e3c495ba3e3438fbf285b19cf9bbf07d2965692300547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:30 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:30.005635105Z" level=info msg="ignoring event" container=bf239ceabd3147fe0e012eb9801492d77876a7ddd93fc0159b21dd207d7c3afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:43 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:43.942876680Z" level=info msg="ignoring event" container=649f5e5c295c89600065ff6074421cadc3ed95db0690cfcfe15ce4a3ac4ac6db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:44 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:05:44.965198998Z" level=info msg="ignoring event" container=69f71bfef17b06cc8a5dc342463c94500db45e0e165608d96196bb1b17386196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:12 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:12.006979008Z" level=info msg="ignoring event" container=62ad65a28324db44aec25b62a7b821e13717955c2910052ef5c10903fccd8507 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:27 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:27.859049038Z" level=info msg="ignoring event" container=806e7ebaaed1d1e4b1ed1116680ed33d3a9dc5d38319656b66d38586e6c02dea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:38 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:06:38.884007403Z" level=info msg="ignoring event" container=5321aae275b78662386b9386b19106ba3fd44d1c6a82e71ef1952c2c46335d24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:35 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:35.009931999Z" level=info msg="ignoring event" container=1f24d4315f70231c2695d277a5b8b9d24336254281ca6e077105280d5e5f618f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:40 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:40.764373891Z" level=info msg="ignoring event" container=db8e2ca87b17366e2e40aa7f7717aab1abd1be0b804290d9c2836790e07bc239 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:07:40 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:07:40.807931491Z" level=info msg="ignoring event" container=546ccc0d47d3f88d8d23afa8e595ee1538bdb059d62110fe9c682afd3e017027 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:51 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:08:51.174324084Z" level=info msg="ignoring event" container=ba73e77719612f70c2bf982e456d9df249c6091fea00a99f39da19aa30b97400 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:09:10 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:09:10.624749210Z" level=info msg="ignoring event" container=6c8f8998a2b067db2d2efe340572f57487ad60b7119d3b66cb8ad53ecef9b764 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:09:11 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:09:11.508618473Z" level=info msg="ignoring event" container=390ff084d3a669e6950f243be8c00786d4d8c14b1f1c1caf7df9599b865d1a38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:09:12 newest-cni-768931 dockerd[1064]: time="2025-08-04T10:09:12.528499787Z" level=info msg="ignoring event" container=38bc7e4cff02cd0b0e15379ee1f125fe25a0d6f35fcd71a3232a2969e437a3a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c8f8998a2b06       1e30c0b1e9b99       4 seconds ago       Exited              etcd                      12                  8291adcc91b97       etcd-newest-cni-768931
	390ff084d3a66       d85eea91cc41d       24 seconds ago      Exited              kube-apiserver            10                  2bc437b51e69e       kube-apiserver-newest-cni-768931
	38bc7e4cff02c       9ad783615e1bc       24 seconds ago      Exited              kube-controller-manager   10                  4c205ed51dffe       kube-controller-manager-newest-cni-768931
	4d9bcb7668482       21d34a2aeacf5       4 minutes ago       Running             kube-scheduler            1                   662feb1b8623b       kube-scheduler-newest-cni-768931
	89bc4723825bb       21d34a2aeacf5       10 minutes ago      Exited              kube-scheduler            0                   6c135c15276d7       kube-scheduler-newest-cni-768931
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:09:14.510422   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:14.510912   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:14.512554   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:14.513015   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:09:14.514591   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003976] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000006] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +3.807738] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000008] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.251962] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +7.935446] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000034] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000005] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[ +23.237968] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 e9 0e 42 0b 64 08 06
	[  +0.000446] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 d5 e2 93 f6 db 08 06
	[Aug 4 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da a7 c8 ad 52 b3 08 06
	[  +0.000606] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da d5 10 fe 4e 73 08 06
	
	
	==> etcd [6c8f8998a2b0] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
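The first line of this etcd output is the proximate crash cause: this etcd binary no longer defines -proxy-refresh-interval (the v2-proxy flags were dropped in newer etcd releases), so the container exits before ever serving on 2379. A sketch for confirming where the flag comes from, assuming the standard kubeadm static-pod manifest path inside the node:

    # Sketch: check whether the generated etcd manifest still passes the
    # removed flag; the path assumes a standard kubeadm layout in the node.
    minikube ssh -p newest-cni-768931 -- \
      sudo grep -n -- '--proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml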
	
	
	
	==> kernel <==
	 10:09:14 up 1 day, 18:50,  0 users,  load average: 0.57, 1.24, 1.68
	Linux newest-cni-768931 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [390ff084d3a6] <==
	W0804 10:08:51.476989       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:51.477011       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 10:08:51.478158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 10:08:51.486773       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 10:08:51.491358       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 10:08:51.491377       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 10:08:51.491602       1 instance.go:232] Using reconciler: lease
	W0804 10:08:51.492344       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:51.492357       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:52.478052       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:52.478057       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:52.492750       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:53.836793       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:54.120157       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:54.352717       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:56.454022       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:56.928715       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:08:57.299558       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:00.091869       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:00.403086       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:02.044821       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:07.699890       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:08.032299       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:09:08.925789       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 10:09:11.492491       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
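That fatal is downstream of the etcd crash loop above: every gRPC dial to 127.0.0.1:2379 is refused because etcd never stays up, so the apiserver gives up building its storage backend and exits, which in turn yields the CrashLoopBackOff entries in the kubelet section below. The restart chain is visible with the same container listing minikube gathers above; a sketch reusing that command (with $() in place of backticks):

    # Sketch: list running and exited containers on the node to see the
    # etcd / kube-apiserver attempt counts; same listing as logs.go runs above.
    minikube ssh -p newest-cni-768931 -- \
      'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'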
	
	
	==> kube-controller-manager [38bc7e4cff02] <==
	I0804 10:08:51.598106       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:08:52.259759       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:08:52.259788       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:08:52.261580       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:08:52.261682       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:08:52.262305       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:08:52.262841       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:09:12.498903       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.76.2:8443/healthz\": dial tcp 192.168.76.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [4d9bcb766848] <==
	E0804 10:08:02.884075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:08:04.005685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:08:08.149988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:08:15.011870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:08:17.251091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:08:21.696623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:08:22.519039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:08:24.418812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:08:27.814522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:08:31.976195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:08:32.712898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:08:33.723365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:08:44.369034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:08:46.240912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:08:48.139294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:09:01.624578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:09:03.277040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:09:03.738631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:09:07.548697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:09:08.025746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:09:09.346339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:09:12.498492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47086->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:09:12.498492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:45256->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:09:12.498514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47080->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:09:12.498525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47070->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	
	
	==> kube-scheduler [89bc4723825b] <==
	E0804 10:03:40.497585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:03:41.644446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:46.793027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:03:47.129343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:03:47.498649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:60970->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:43076->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:03:49.482712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:60974->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:03:49.482728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:43066->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:03:49.518652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:03:52.175953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:03:52.381066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:04:06.761064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:04:06.975695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:04:08.623458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:04:16.963592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:04:22.569447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:04:23.629502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:04:24.298423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:04:25.174292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:04:25.897947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:04:28.497132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:04:29.219349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:04:31.307534       1 server.go:274] "handlers are not fully synchronized" err="context canceled"
	E0804 10:04:31.307656       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: I0804 10:09:10.707530   12406 scope.go:117] "RemoveContainer" containerID="6c8f8998a2b067db2d2efe340572f57487ad60b7119d3b66cb8ad53ecef9b764"
	Aug 04 10:09:10 newest-cni-768931 kubelet[12406]: E0804 10:09:10.707691   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd pod=etcd-newest-cni-768931_kube-system(0a578c02c1067bda6f15c5033e01f33e)\"" pod="kube-system/etcd-newest-cni-768931" podUID="0a578c02c1067bda6f15c5033e01f33e"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.497195   12406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": read tcp 192.168.76.2:45230->192.168.76.2:8443: read: connection reset by peer" node="newest-cni-768931"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: I0804 10:09:11.718129   12406 scope.go:117] "RemoveContainer" containerID="546ccc0d47d3f88d8d23afa8e595ee1538bdb059d62110fe9c682afd3e017027"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.719063   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: I0804 10:09:11.719156   12406 scope.go:117] "RemoveContainer" containerID="390ff084d3a669e6950f243be8c00786d4d8c14b1f1c1caf7df9599b865d1a38"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.719337   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-newest-cni-768931_kube-system(59d53768f66016db0d7a945479ffe178)\"" pod="kube-system/kube-apiserver-newest-cni-768931" podUID="59d53768f66016db0d7a945479ffe178"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.723840   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: I0804 10:09:11.723909   12406 scope.go:117] "RemoveContainer" containerID="6c8f8998a2b067db2d2efe340572f57487ad60b7119d3b66cb8ad53ecef9b764"
	Aug 04 10:09:11 newest-cni-768931 kubelet[12406]: E0804 10:09:11.724036   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd pod=etcd-newest-cni-768931_kube-system(0a578c02c1067bda6f15c5033e01f33e)\"" pod="kube-system/etcd-newest-cni-768931" podUID="0a578c02c1067bda6f15c5033e01f33e"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497788   12406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-768931?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47032->192.168.76.2:8443: read: connection reset by peer" interval="3.2s"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497846   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47066->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497856   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47052->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497871   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dnewest-cni-768931&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47050->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.497953   12406 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.76.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.76.2:47042->192.168.76.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: I0804 10:09:12.743673   12406 scope.go:117] "RemoveContainer" containerID="db8e2ca87b17366e2e40aa7f7717aab1abd1be0b804290d9c2836790e07bc239"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.744554   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: I0804 10:09:12.744657   12406 scope.go:117] "RemoveContainer" containerID="38bc7e4cff02cd0b0e15379ee1f125fe25a0d6f35fcd71a3232a2969e437a3a5"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.744821   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-newest-cni-768931_kube-system(05d4f75e5879bee8e6895966620bd9b4)\"" pod="kube-system/kube-controller-manager-newest-cni-768931" podUID="05d4f75e5879bee8e6895966620bd9b4"
	Aug 04 10:09:12 newest-cni-768931 kubelet[12406]: E0804 10:09:12.945109   12406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{newest-cni-768931.1858887e33be7b55  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:newest-cni-768931,UID:newest-cni-768931,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:newest-cni-768931,},FirstTimestamp:2025-08-04 10:08:50.476186453 +0000 UTC m=+0.059980287,LastTimestamp:2025-08-04 10:08:50.476186453 +0000 UTC m=+0.059980287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:newest-cni-768931,}"
	Aug 04 10:09:13 newest-cni-768931 kubelet[12406]: I0804 10:09:13.099356   12406 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-768931"
	Aug 04 10:09:13 newest-cni-768931 kubelet[12406]: E0804 10:09:13.099833   12406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="newest-cni-768931"
	Aug 04 10:09:13 newest-cni-768931 kubelet[12406]: E0804 10:09:13.729492   12406 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-768931\" not found" node="newest-cni-768931"
	Aug 04 10:09:13 newest-cni-768931 kubelet[12406]: I0804 10:09:13.729575   12406 scope.go:117] "RemoveContainer" containerID="390ff084d3a669e6950f243be8c00786d4d8c14b1f1c1caf7df9599b865d1a38"
	Aug 04 10:09:13 newest-cni-768931 kubelet[12406]: E0804 10:09:13.729711   12406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-newest-cni-768931_kube-system(59d53768f66016db0d7a945479ffe178)\"" pod="kube-system/kube-apiserver-newest-cni-768931" podUID="59d53768f66016db0d7a945479ffe178"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931
E0804 10:09:15.253758 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:09:15.394290 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-768931 -n newest-cni-768931: exit status 2 (268.817004ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-768931" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (26.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.2s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
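Each WARNING below is one poll of the same label-selector query, retried within the 9m0s budget. For reference, the equivalent hand-run query, as a sketch for checking manually once the apiserver answers again (assumes the profile-named kubectl context that minikube creates by default):

    # Sketch of the API call the helper polls, expressed as kubectl.
    kubectl --context no-preload-499486 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard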
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:09:50.237812 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 17 more times ...]
E0804 10:10:08.112529 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:10:08.403391 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 14 more times ...]
E0804 10:10:22.499240 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 3 more times ...]
E0804 10:10:27.079867 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 3 more times ...]
E0804 10:10:30.664154 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 10 more times ...]
E0804 10:10:41.678035 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:10:43.935017 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 13 more times ...]
E0804 10:10:58.366920 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 13 more times ...]
E0804 10:11:11.637908 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": net/http: TLS handshake timeout
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": net/http: TLS handshake timeout
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.1:51510->192.168.94.2:8443: read: connection reset by peer
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 8 more times ...]
E0804 10:11:44.421371 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 4 more times ...]
E0804 10:11:49.002126 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 8 more times ...]
E0804 10:11:58.364315 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 25 more times ...]
E0804 10:12:24.250715 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:12:24.542938 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[... identical warning repeated 7 more times ...]
E0804 10:12:33.012896 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 18 more times]
E0804 10:12:51.954234 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:12:52.244858 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
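The cert_rotation errors interleaved with these warnings come from client-go attempting to reload client certificates for profiles (auto-561540, flannel-561540, enable-default-cni-561540, and others below) whose directories were already removed by earlier teardowns. A minimal sketch of the failing reload, assuming the same file layout as the log message (the path is copied from the error, not taken from the test code):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Hypothetical reproduction of the log line above: the profile directory
	// was deleted by an earlier test's teardown, so the reload hits ENOENT.
	base := "/home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540"
	_, err := tls.LoadX509KeyPair(base+"/client.crt", base+"/client.key")
	if err != nil {
		// Prints: open .../client.crt: no such file or directory
		fmt.Println("Loading client cert failed:", err)
	}
}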
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 29 more times]
E0804 10:13:21.427404 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 22 more times]
E0804 10:13:44.758279 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 8 more times]
E0804 10:13:53.891295 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 1 more time]
E0804 10:13:55.497425 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 4 more times]
E0804 10:14:00.560477 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 2 more times]
E0804 10:14:03.491548 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:14:05.087674 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:14:05.141053 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 9 more times]
E0804 10:14:15.254299 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 12 more times]
E0804 10:14:28.262949 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 4 more times]
E0804 10:14:32.844261 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 16 more times]
E0804 10:14:50.237032 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 34 more times]
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:15:30.664075 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 10 more times]
E0804 10:15:41.678803 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:15:43.934752 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 28 more times]
E0804 10:16:13.300273 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 24 more times]
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": net/http: TLS handshake timeout
E0804 10:16:58.363677 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.1:38652->192.168.94.2:8443: read: connection reset by peer
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 25 more times]
E0804 10:17:24.250885 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:17:24.543175 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[last message repeated 7 more times]
E0804 10:17:33.012460 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous warning repeated 47 more times while the apiserver remained unreachable]
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486: exit status 2 (271.899068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-499486" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
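(For context: the warnings above come from the test helper's poll loop, which lists pods by label selector every few seconds until its deadline; with the apiserver stopped every list fails with "connection refused", and once the surrounding context expires client-go's rate limiter fails fast with "context deadline exceeded", which is the final warning. A minimal sketch of that pattern, assuming client-go -- the function name and clientset wiring are illustrative stand-ins, not minikube's actual helper:)

	package sketch

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPod polls until a pod matching selector reports phase Running.
	// Each failed list attempt is logged as a WARNING, as in the output above.
	func waitForLabeledPod(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					// A stopped apiserver surfaces here as "connect: connection refused".
					fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
					return false, nil // swallow the error so the poll keeps retrying
				}
				for _, p := range pods.Items {
					if p.Status.Phase == "Running" {
						return true, nil
					}
				}
				return false, nil
			})
	}
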
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:

-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2149831,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T10:03:35.921334492Z",
	            "FinishedAt": "2025-08-04T10:03:34.718097407Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fac6055cf947ab02c491cbb5dd64cbf3c0ae98a2e42975ad1d99b1bdbe7a9bbd",
	            "SandboxKey": "/var/run/docker/netns/fac6055cf947",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:00:36:b7:69:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "cd2f2866ae03228d2f1c745367746ee5866c33aa7baf64438d9f50fae785c9c7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
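(The inspect dump above is rarely consumed whole; the harness pulls single fields with Go templates, e.g. the 22/tcp HostPort expression that appears later in this log. A small sketch along the same lines, extracting the host port published for the apiserver's 8443/tcp from the container in this report -- shelling out to docker is the assumption here, not minikube's internal code path:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask docker for one field instead of parsing the full inspect JSON:
		// the host port that 8443/tcp (the apiserver) is published on.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"no-preload-499486").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 33167 in the dump above
	}
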
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 2 (261.716992ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
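(The one-word stdout blocks above are expected: --format renders minikube's status through Go's text/template, so the template's single field is the entire output. A minimal illustration with a simplified stand-in struct, not minikube's real status type:)

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct minikube renders via --format.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		s := Status{Host: "Running", APIServer: "Stopped"}
		// The same template string passed as --format={{.Host}} above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, s) // prints: Running
	}
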
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-499486 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-561540 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo docker system info                                                                                                                                                                                                              │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                        │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                  │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cri-dockerd --version                                                                                                                                                                                                           │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat containerd --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo containerd config dump                                                                                                                                                                                                          │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat crio --no-pager                                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                         │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo crio config                                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p kubenet-561540                                                                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ stop    │ -p newest-cni-768931 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ image   │ newest-cni-768931 image list --format=json                                                                                                                                                                                                             │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ pause   │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ unpause │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ delete  │ -p newest-cni-768931                                                                                                                                                                                                                                   │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:09 UTC │ 04 Aug 25 10:09 UTC │
	│ delete  │ -p newest-cni-768931                                                                                                                                                                                                                                   │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:09 UTC │ 04 Aug 25 10:09 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 10:04:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
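
configureAuth issues a Docker TLS server certificate signed by the machine CA, with the SAN list shown above (the container IP, 127.0.0.1, localhost, and the machine names). A sketch of that issuance with the standard crypto/x509 package (an illustration, not minikube's code; it assumes the CA key is RSA in PKCS#1 PEM, and the SANs and expiry are taken from this log):

// Sketch: issue a CA-signed TLS server certificate with an explicit SAN list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEMBlock(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEMBlock("certs/ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("certs/ca-key.pem").Bytes) // assumes RSA PKCS#1
	if err != nil {
		panic(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-768931"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-768931"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
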
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
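
Note the update pattern above: the unit is rendered to docker.service.new, `diff -u` compares it against the live file, and only on a difference is the new file moved into place followed by daemon-reload and a Docker restart; an unchanged config costs no restart. The same write-only-if-changed idiom in Go (a sketch, not the actual implementation):

// Sketch: replace a config file only when its content actually changed,
// so the caller can skip an expensive service restart.
package main

import (
	"bytes"
	"os"
)

// updateIfChanged writes content to path only when it differs from what is
// already there; it reports whether a reload/restart is needed.
func updateIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical config: nothing to do
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic on the same filesystem
}

func main() {
	changed, err := updateIfChanged("/lib/systemd/system/docker.service", []byte("[Unit]\n...\n"))
	if err != nil {
		panic(err)
	}
	if changed {
		// here minikube runs: systemctl daemon-reload && systemctl restart docker
	}
}
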
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
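
Both waits above ("Will wait 60s for socket path /var/run/cri-dockerd.sock" and "Will wait 60s for crictl version") are simple bounded polls. A minimal Go sketch of the socket wait (the path comes from this log; the poll interval is an assumption):

// Sketch: poll until a CRI socket path exists, with a hard deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present, the runtime is up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("CRI socket ready")
}
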
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
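
The "Images are preloaded" decision is made by listing the runtime's images and comparing them against the expected set for this Kubernetes version. Roughly, in Go (a sketch; the expected list is abbreviated to a few images from the output above):

// Sketch: list images via the Docker CLI and check an expected preload set.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.34.0-beta.0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img) // would trigger preload extraction instead of skipping it
		}
	}
}
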
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
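
The rendered kubeadm config above stacks four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A stdlib-only Go sketch that splits such a file and reports each document's kind (illustrative; assumes the config was saved as kubeadm.yaml):

// Sketch: split a multi-document YAML file and print each "kind",
// using only the standard library (no YAML parser needed for this check).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
// Expected output for the config above:
// InitConfiguration
// ClusterConfiguration
// KubeletConfiguration
// KubeProxyConfiguration
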
	
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
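
Each `openssl x509 -noout -checkend 86400` above asks one question: does the certificate expire within the next 24 hours (86400 seconds)? The Go equivalent of that check (a sketch; the path is taken from the log):

// Sketch: the Go analogue of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon) // minikube regenerates the cert if true
}
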
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0804 10:04:43.886614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
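
While the restarted apiserver is still refusing connections, every `kubectl apply` fails and retry.go reschedules it after a short randomized delay (305.419917ms, 145.860796ms, ... above). A generic Go sketch of that retry-with-jittered-backoff pattern (not minikube's exact retry.go):

// Sketch: retry a failing operation with doubling, jittered backoff.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// randomized delay so parallel appliers don't hammer the apiserver in lockstep
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	_ = retry(5, 200*time.Millisecond, func() error {
		return fmt.Errorf("connect: connection refused") // stand-in for the kubectl apply
	})
}
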
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:04.883286 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
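In parallel with the applies, the api_server.go lines poll https://192.168.76.2:8443/healthz at roughly half-second intervals and record "stopped" on each connection refusal. A self-contained sketch of that kind of probe loop (assumption: the endpoint, interval, and TLS handling here are illustrative, not minikube's exact code; verification is skipped only to keep the example standalone, where a real client would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz repeatedly GETs the apiserver /healthz endpoint until it
    // answers 200 OK or the overall deadline expires, logging each failed
    // attempt much like the api_server.go entries in the log.
    func pollHealthz(url string, interval, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver is healthy
    			}
    		} else {
    			fmt.Printf("stopped: %s: %v\n", url, err)
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := pollHealthz("https://192.168.76.2:8443/healthz", 500*time.Millisecond, 10*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }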
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
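The stderr hint "turn validation off with --validate=false" points at the actual failure mode: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver named in the in-node kubeconfig (here https://localhost:8443), so while the apiserver is down the apply fails before any manifest is even submitted. A minimal reproduction of that schema request, under the assumption that it is made against the localhost endpoint seen in the log:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// The same URL kubectl's validator fetches, per the logged error.
    	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
    	if err != nil {
    		// With the apiserver down this mirrors the logged failure:
    		// "dial tcp [::1]:8443: connect: connection refused".
    		fmt.Println("failed to download openapi:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi status:", resp.Status)
    }

Note that both endpoints refuse connections here: kubectl's localhost:8443 dial from inside the node and the external 192.168.76.2:8443 health checks, which is consistent with the apiserver itself being down rather than a routing problem.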
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
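Interleaved with the applies, `api_server.go` polls the apiserver's `/healthz` endpoint roughly every 500ms until it answers. A rough sketch of such a poll loop, assuming the URL and interval from the log; the TLS handling is deliberately simplified (it skips certificate verification for brevity, which a real client trusting the cluster CA would not do):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline
// passes. Illustrative sketch, not minikube's api_server.go.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second, // per-request cap, like the log's Client.Timeout
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "stopped: ... connect: connection refused" lines.
			fmt.Printf("checking %s: %v\n", url, err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("checking %s: status %d\n", url, resp.StatusCode)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	err := waitForHealthz("https://192.168.76.2:8443/healthz",
		500*time.Millisecond, 30*time.Second)
	fmt.Println(err)
}
```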
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
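Each `ssh_runner.go:195` line runs the same remote command: `kubectl apply --force -f <manifest>` with `KUBECONFIG` pointed at the control plane's kubeconfig. The recurring stderr comes from kubectl failing to download the OpenAPI schema for client-side validation while the apiserver is unreachable; as the message itself notes, `--validate=false` would skip that download. A hypothetical local equivalent of the command under test (paths taken from the log; `applyManifest` is an illustrative helper, not minikube's):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest shells out the way the log's ssh_runner invocations do.
// Sketch only: error wrapping and flag handling are illustrative.
func applyManifest(kubectl, kubeconfig, manifest string, validate bool) error {
	args := []string{"apply", "--force", "-f", manifest}
	if !validate {
		// Skips the OpenAPI schema download that fails in the log with
		// "connect: connection refused" while the apiserver is down.
		args = append(args, "--validate=false")
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		true,
	)
	fmt.Println(err)
}
```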
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
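The `node_ready.go:55` warnings from process 2149628 are a separate poll: the test repeatedly fetches the node object for "no-preload-499486" and inspects its `Ready` condition, failing through TLS handshake timeouts and connection refusals while the apiserver restarts. A sketch of that check using client-go, under stated assumptions: it requires the `k8s.io/client-go` module, clientset construction is omitted, and `isNodeReady` is an illustrative helper rather than minikube's actual function.

```go
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady fetches the named node and reports whether its Ready
// condition is True. The errors returned here are what the log's
// node_ready.go warnings wrap: connection refused, TLS handshake
// timeout, and so on.
func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("error getting node %q condition \"Ready\" status: %w", name, err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	// No Ready condition reported yet: treat as not ready.
	return false, nil
}
```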
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
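The block above is minikube's per-cycle container inventory: one docker ps per expected control-plane component, filtered on the kubelet's k8s_<name> container-name prefix. The same inventory can be taken in one loop; a minimal sketch, assuming it runs inside the node:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
      printf '%s: %s\n' "$c" "${ids:-none}"   # mirrors the "N containers: [...]" lines
    done
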
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
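Note the two endpoints in play: minikube probes https://192.168.76.2:8443 from the host, while the in-node kubectl above dials https://localhost:8443 because /var/lib/minikube/kubeconfig points at the node-local address. Which server a kubeconfig targets can be checked directly; a sketch, assuming node access:

    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'; echo
    # both addresses refusing connections points at the apiserver container itself,
    # not at host-to-node networking
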
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
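These no-preload-499486 warnings come from a second minikube process (pid 2149628, cluster 192.168.94.2) running in parallel with this test's process (pid 2163332, cluster 192.168.76.2); the two streams stay interleaved for the rest of this log. They can be separated by pid when reading; a sketch, with report.txt as a hypothetical saved copy of this log:

    grep ' 2163332 ' report.txt > functional.log   # this test (192.168.76.2)
    grep ' 2149628 ' report.txt > no-preload.log   # parallel no-preload test (192.168.94.2)
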
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
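This probe fails differently: instead of an immediate "connection refused", the client times out awaiting headers, and the discovery pass just below finds a second kube-apiserver container (806e7ebaaed1 alongside the old 649f5e5c295c), consistent with the apiserver being restarted and the port briefly accepting connections without serving. A timed probe distinguishes the two cases; sketch:

    # fast "connection refused"  => nothing bound to the port
    # hang until --max-time hits => something listening but not answering
    time curl -sk --max-time 5 https://192.168.76.2:8443/healthz
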
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
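The addon phase thus gives up after 1m45s with enabled=[]. minikube's recorded addon state can be checked from the host after a run like this; a sketch using this test's binary and profile name:

    out/minikube-linux-amd64 addons list -p functional-699837
    # shows each addon's enabled/disabled flag in minikube's own config;
    # with the apiserver down, none of the applied manifests actually landed
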
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c" /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
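The old apiserver container id (649f5e5c295c) was evidently removed during the restart, so fetching its logs by the stale id fails with "No such container". Re-resolving the id immediately before fetching avoids the race; a sketch, inside the node:

    id=$(docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}' | head -n1)
    [ -n "$id" ] && docker logs --tail 400 "$id"   # docker ps lists newest first
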
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
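The cycle above repeats for the rest of this test: probe the apiserver healthz endpoint, get connection refused, then re-gather component logs before the next attempt. A minimal Go sketch of such a probe loop, assuming only what the log shows (endpoint https://192.168.76.2:8443/healthz, roughly a three-second retry cadence); the function names here are hypothetical, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz polls the apiserver /healthz endpoint until it answers
    // 200 OK or the deadline passes. TLS verification is skipped because
    // the target is a cluster-internal, self-signed endpoint.
    func probeHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver is up
                }
            }
            // "connection refused" lands here; wait and retry, as the log shows
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("apiserver %s never became healthy", url)
    }

    func main() {
        err := probeHealthz("https://192.168.76.2:8443/healthz", time.Now().Add(5*time.Minute))
        if err != nil {
            fmt.Println(err)
        }
    }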
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
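Each gather pass locates candidate containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and then tails the last 400 lines of every match. A hypothetical Go sketch of that enumeration step (the component list is copied from the log; the helper itself is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches the kubeadm convention k8s_<component>, mirroring
    // the docker ps calls in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
            for _, id := range ids {
                // tail the last 400 lines of each match, as logs.go does
                exec.Command("docker", "logs", "--tail", "400", id).Run()
            }
        }
    }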
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
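Every describe nodes attempt above fails the same way: kubectl inside the node reads /var/lib/minikube/kubeconfig, which points it at localhost:8443, and nothing is listening there, so each API call dies with connection refused. A small Go sketch of the retry-worthy error classification these logs imply (an assumed helper, not minikube code):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    // isConnRefused reports whether err is the transient "nothing is
    // listening yet" failure that dominates this log; a caller retries
    // on it instead of aborting.
    func isConnRefused(err error) bool {
        return errors.Is(err, syscall.ECONNREFUSED)
    }

    func main() {
        // The same dial kubectl attempts via the in-node kubeconfig.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("apiserver is listening")
        } else if isConnRefused(err) {
            fmt.Println("apiserver not listening yet; retry later")
        } else {
            fmt.Println("other error:", err)
        }
    }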
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:49.883713 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
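	The "failed describe nodes" warning above (and the identical ones that follow) all share one cause: inside the node, kubectl reads /var/lib/minikube/kubeconfig, which points at localhost:8443, and that port refuses connections while the kube-apiserver container is down. A minimal manual check, sketched only from the binary and kubeconfig paths recorded in this log, would be:

	    # Sketch only: re-run the probe by hand inside the node (paths from this log).
	    # `kubectl get --raw /healthz` hits the same localhost:8443 endpoint the
	    # describe-nodes calls fail against, isolating apiserver reachability.
	    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz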
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
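	Between those kubectl attempts, the retry loop above repeats one diagnostic cycle: probe the apiserver healthz endpoint, enumerate the k8s_* containers, and tail the logs of whatever is found. A condensed sketch of that cycle, assuming the host address and name filters recorded in this log:

	    # Sketch of the diagnostic cycle this log records (host/filters from the log).
	    if ! curl -fsk --max-time 5 https://192.168.76.2:8443/healthz >/dev/null; then
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}')
	        for id in $ids; do docker logs --tail 400 "$id"; done  # per-component logs
	      done
	      sudo journalctl -u kubelet -n 400   # kubelet log tail, as gathered above
	    fi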
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
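(Editor's note: interleaved with the log gathering, a second minikube process — pid 2149628, the no-preload-499486 cluster — is polling that node's Ready condition and retrying on "connection refused" roughly every two seconds. A simplified sketch of such a poll loop follows; the raw HTTP GET without client-go or credentials, the fixed 2s interval, and the retry cap are our simplifications, and a live apiserver would additionally reject the request without auth.)

    // node_ready_poll_sketch.go — retry loop in the spirit of the node_ready.go:55
    // warnings above: fetch the node object, log and retry on failure.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486"
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for attempt := 1; attempt <= 5; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("error getting node (will retry): %v\n", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("node object reachable:", resp.Status)
    		return
    	}
    	fmt.Println("giving up after 5 attempts")
    }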
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 62ad65a28324
	
	** /stderr **
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
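(Editor's note: the "container status" command above is a shell fallback chain — run crictl if `which crictl` finds it, and if that whole command fails, fall back to `docker ps -a`. The same prefer-crictl-else-docker logic in Go is sketched below; the wrapper is our illustration of the logged one-liner, not minikube code.)

    // container_status_sketch.go — prefer crictl, fall back to the Docker CLI,
    // mirroring `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// If crictl is missing or errors out, the `||` in the shell chain kicks in;
    	// here the second Command plays that role.
    	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
    	}
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }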
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 806e7ebaaed1
	
	** /stderr **
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 5321aae275b7
	
	** /stderr **
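(Editor's note: the "No such container" failures above appear to be stale IDs. The IDs 806e7ebaaed1, 62ad65a28324, and 5321aae275b7 were enumerated at 10:07:19-20, but the describe-nodes step then hung for 21.7s, and by 10:07:41.8 the crash-looping apiserver and etcd containers had been recreated — the next enumeration at 10:07:44 shows fresh IDs 546ccc0d47d3 and 1f24d4315f70. One way to shrink that window is to re-resolve the ID by name immediately before pulling logs, as sketched below; `freshLogs` is our illustration, not minikube code.)

    // stale_id_sketch.go — resolve the container ID right before `docker logs`
    // instead of reusing an ID cached from an earlier enumeration.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func freshLogs(component string) (string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return "", err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return "", fmt.Errorf("no %s container found", component)
    	}
    	// Use the freshly resolved ID, not one cached tens of seconds earlier.
    	logs, err := exec.Command("docker", "logs", "--tail", "400", ids[0]).CombinedOutput()
    	return string(logs), err
    }

    func main() {
    	logs, err := freshLogs("kube-apiserver")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(logs)
    }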
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
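(Editor's note: the kubelet and dmesg steps above round out the host-side log bundle — journalctl for the docker/cri-docker and kubelet units, plus a dmesg tail filtered to warnings and worse. A sketch of running that bundle the same way the ssh_runner lines do, via `bash -c`, follows; the command list is our summary of what the log shows, not a minikube type.)

    // log_bundle_sketch.go — host log gathering as logged above; pipelines like
    // the dmesg | tail need a shell, hence bash -c.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := []string{
    		"sudo journalctl -u docker -u cri-docker -n 400",
    		"sudo journalctl -u kubelet -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%q failed: %v\n", c, err)
    		}
    		fmt.Print(string(out))
    	}
    }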
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
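Editor's note: every cycle opens with the same healthz probe against https://192.168.76.2:8443/healthz and records "stopped: ... connection refused" when nothing is listening. A minimal sketch of such a probe loop, assuming the endpoint and the roughly three-second retry interval visible in the timestamps; minikube's real loop (api_server.go) differs in detail, and InsecureSkipVerify is only there to keep the sketch self-contained:

```go
// Hypothetical healthz poller: retry until the apiserver answers or we
// give up. "connect: connection refused" means nothing is bound to 8443.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves its own cert; skipping verification is
			// purely for this diagnostic sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// The state every cycle above reports as "stopped".
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```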
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
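Editor's note: the describe-nodes step fails for the same root cause as the healthz probe. The kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443 on the node, and with the apiserver container not listening, every API call is refused and kubectl exits with status 1. A hypothetical sketch of running that command and surfacing stderr the way the log gatherer reports it (the binary path and flags are copied from the log; the error handling is illustrative):

```go
// Run "kubectl describe nodes" against the node-local kubeconfig and show
// stderr on failure, as the "Gathering logs for describe nodes" step does.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver down, kubectl exits 1 and stderr carries the
		// "connection to the server localhost:8443 was refused" lines.
		fmt.Printf("failed describe nodes: %v\nstderr:\n%s", err, stderr.String())
		return
	}
	fmt.Print(stdout.String())
}
```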
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
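Editor's note: the container-status step shells out with a fallback, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": prefer crictl when it is on PATH, and fall back to plain docker ps -a when crictl is missing or fails. A hypothetical Go equivalent of that fallback chain:

```go
// Prefer crictl for container status, fall back to docker, mirroring the
// shell one-liner in the log above. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	// Try crictl first if it is installed...
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	// ...otherwise (or if crictl failed) fall back to docker.
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(string(out))
}
```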
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
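Editor's note: the interleaved lines from process 2149628 (the no-preload test) show two distinct failure modes against 192.168.94.2:8443: "net/http: TLS handshake timeout" (the port accepted the connection but the handshake stalled, typical of an apiserver still starting or overloaded) and "connect: connection refused" (nothing listening at all). An illustrative classifier for the two, assuming Go's standard error strings on Linux:

```go
// Distinguish "no listener" from "listener not answering TLS" when probing
// an apiserver endpoint. Sketch only; the URL is taken from the log.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"strings"
	"syscall"
	"time"
)

func classify(err error) string {
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		return "refused: nothing listening on the port"
	case strings.Contains(err.Error(), "TLS handshake timeout"):
		return "handshake timeout: port open, server not answering TLS"
	default:
		return "other"
	}
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	_, err := client.Get("https://192.168.94.2:8443/api/v1/nodes/no-preload-499486")
	if err != nil {
		fmt.Println(classify(err))
	}
}
```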
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:47.883235 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:50.383116 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:52.383162 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:54.383410 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:56.383810 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:58.883290 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:00.883650 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:03.383190 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:05.383617 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:07.384051 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:09.883346 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:11.883783 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:13.884208 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:16.383435 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:18.383891 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:20.883429 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:22.884027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:25.383556 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:27.883164 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:29.883548 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:31.883955 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:34.383514 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:36.883247 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:38.883512 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:40.884109 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:43.383400 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:09:45.383376 2149628 node_ready.go:38] duration metric: took 6m0.000813638s for node "no-preload-499486" to be "Ready" ...
	I0804 10:09:45.385759 2149628 out.go:201] 
	W0804 10:09:45.386973 2149628 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 10:09:45.386995 2149628 out.go:270] * 
	W0804 10:09:45.389624 2149628 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:09:45.390891 2149628 out.go:201] 
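	
	The loop recorded above repeats roughly every three seconds until the 6m0s deadline: probe the apiserver's /healthz endpoint, then enumerate the control-plane containers and collect their logs. A minimal sketch for rerunning the same probe by hand from inside the node (assumptions: the shell is opened with `minikube ssh`, and 192.168.76.2:8443 is this run's apiserver endpoint, taken from the log above):
	
	    # Sketch: reproduce one cycle of minikube's diagnostics by hand.
	    # Assumption: 192.168.76.2:8443 is the apiserver endpoint from this run.
	    curl -ksf https://192.168.76.2:8443/healthz || echo "healthz still refused"
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      docker ps -a --filter=name=k8s_${name} --format='{{.ID}}'
	    done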
	
	
	==> Docker <==
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8e25ebb8a89d445633ee72689dd9126eae7afe58d9a207dbe2cdc5da1c82e7c5/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c8e64888584066fdfe6acecc56b1467a84c162997e4f0b1a939859400ab4a5f/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/26755274d895161ffe5b3f341bb7944f31daecb44dda61932240318d73b09b9c/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/faaa3a488dc04608657ace902b23aff9e53e1d14755fdf70c32d9c4a86ae6ec6/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 dockerd[1060]: time="2025-08-04T10:03:45.970130751Z" level=info msg="ignoring event" container=fc533eec1834b08c163742338f45821b5f02c6c5578ebe0fa5487906728547c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:07 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:07.509440559Z" level=info msg="ignoring event" container=835331562e21d7f94c792e7e547dd630d261e361d3dbf1c95186b90631d45ab4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:08 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:08.536777903Z" level=info msg="ignoring event" container=6c7c3e8e5a5a316e53d6dfbe663ac4dca13a60be5ece3da5dc2247e32f82d17a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:08 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:08.805380544Z" level=info msg="ignoring event" container=465ed5c63105c622faf628dc45dffc004b55d09148a84a0c45ec2f8a27c97fbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:39.818927796Z" level=info msg="ignoring event" container=0595640f46489eb8407e6e761b084aaf6097c9c319d96bc72e2a6da471c5d644 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:44 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:44.826174830Z" level=info msg="ignoring event" container=c53148ebe39d8e04e877760553c72fbbb0efca7dc09fc1550c0d193752988ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:46 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:46.743926255Z" level=info msg="ignoring event" container=c90ac788092b4d99962cf322dca6016fcbab4b4a8a55f82e1817c83b0f7d9215 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:28 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:28.445977627Z" level=info msg="ignoring event" container=624b9721d7e89385a14cf7a113afd2059fd713021c967546422f8d3e449b1c07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:33 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:33.808565031Z" level=info msg="ignoring event" container=86926cfa626f66ab359d1d7b13dfaa8c7749178320dbff42dccd2306e7130172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:39.468564300Z" level=info msg="ignoring event" container=7c4f93cb4bfbd43195edf99e929820bd4cd2ff17c1c7e1820fc35244264f90eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:39.443989985Z" level=info msg="ignoring event" container=b0de8a87430e54e04bae9e0fe793e3fda728c66cafdbbb857dfa8b70b7b849a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:41 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:41.920345198Z" level=info msg="ignoring event" container=95273882a0ba3beeec00a1ee16fc2e13f9dc7d28771bbf35eeed20bc1e617760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:56 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:56.807457292Z" level=info msg="ignoring event" container=9ce95901ec688dadabbfeba65d8a96e0cd422aa6483ce4093631e0769ecec314 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:23 no-preload-499486 dockerd[1060]: time="2025-08-04T10:08:23.128503844Z" level=info msg="ignoring event" container=152aef9e02ab4ddae450a3b16f379f3b222a44743fca7913d5d483269f9dfc2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:31 no-preload-499486 dockerd[1060]: time="2025-08-04T10:08:31.608495511Z" level=info msg="ignoring event" container=8fb3f2292ab14a56a1592fff79c30568329e27afc3d74f06f288f788a6b3c3a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:09:46 no-preload-499486 dockerd[1060]: time="2025-08-04T10:09:46.825802914Z" level=info msg="ignoring event" container=a810f701be18750d51044ccf9d9ff7fef305f901df6922bfca0f6a234ed1aa24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:11:34 no-preload-499486 dockerd[1060]: time="2025-08-04T10:11:34.510840058Z" level=info msg="ignoring event" container=472dcd03fe966df29d93b8c639b463faef262c9b90416aac5e23792b181bb14f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:11:36 no-preload-499486 dockerd[1060]: time="2025-08-04T10:11:36.385566691Z" level=info msg="ignoring event" container=823f70262bea3d5c7f4b24113caf89653caced8307fea734bda4d7fd9ee05224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:14:49 no-preload-499486 dockerd[1060]: time="2025-08-04T10:14:49.818859868Z" level=info msg="ignoring event" container=170e383b72244e90a4b5a27759222438dfdb8d4a28ad9820bdb56232fd5d66e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:16:58 no-preload-499486 dockerd[1060]: time="2025-08-04T10:16:58.167680349Z" level=info msg="ignoring event" container=6ea0a675973d81dde80ae3a00c3d70b3770278bb2eb3abbd26498cec2d3752d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:17:04 no-preload-499486 dockerd[1060]: time="2025-08-04T10:17:04.676044247Z" level=info msg="ignoring event" container=80aef0e1e41b81cb0f8b058ed3f2dccceb3285abc8cabc20f2603666b99f4941 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80aef0e1e41b8       9ad783615e1bc       2 minutes ago       Exited              kube-controller-manager   12                  faaa3a488dc04       kube-controller-manager-no-preload-499486
	6ea0a675973d8       d85eea91cc41d       2 minutes ago       Exited              kube-apiserver            12                  26755274d8951       kube-apiserver-no-preload-499486
	170e383b72244       1e30c0b1e9b99       3 minutes ago       Exited              etcd                      12                  8e25ebb8a89d4       etcd-no-preload-499486
	f9db373fc015a       21d34a2aeacf5       15 minutes ago      Running             kube-scheduler            1                   5c8e648885840       kube-scheduler-no-preload-499486
	2a1c20b2ffee8       21d34a2aeacf5       20 minutes ago      Exited              kube-scheduler            0                   d2b1bfd452832       kube-scheduler-no-preload-499486
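
The table shows etcd, kube-apiserver and kube-controller-manager all Exited on their 12th attempt, i.e. crash-looping, with only the scheduler still running. A rough way to reproduce this view from the harness (a sketch; assumes the docker driver image ships crictl, whose --name filter takes a regex):

  out/minikube-linux-amd64 ssh -p no-preload-499486 -- \
    sudo crictl ps -a --name 'etcd|kube-apiserver|kube-controller-manager'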
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:18:48.708580    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:18:48.709101    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:18:48.710615    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:18:48.711001    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:18:48.712526    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
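
Every probe above dials localhost:8443 inside the node and is refused, so kubectl never gets an API group list. A quick check from the harness that nothing is listening on the apiserver port (a sketch; assumes ss is present in the node image):

  out/minikube-linux-amd64 ssh -p no-preload-499486 -- sudo ss -ltn 'sport = :8443'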
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003976] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000006] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +3.807738] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000008] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.251962] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +7.935446] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000034] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000005] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[ +23.237968] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 e9 0e 42 0b 64 08 06
	[  +0.000446] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 d5 e2 93 f6 db 08 06
	[Aug 4 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da a7 c8 ad 52 b3 08 06
	[  +0.000606] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da d5 10 fe 4e 73 08 06
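
The repeated "martian source" entries are the kernel logging service-CIDR traffic (10.96.0.1) seen on the docker bridge; noisy, but not the failure here. For reference, the sysctls that control this logging on the CI host:

  sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter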
	
	
	==> etcd [170e383b7224] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
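
This is the root failure of the cascade: etcd exits immediately because --proxy-refresh-interval, a v2-proxy flag dropped from newer etcd releases, is still present in the generated static-pod manifest, so etcd never comes up and every component that depends on it follows. A sketch of confirming and (hypothetically) working around it on the node, assuming the standard kubeadm manifest path; the kubelet re-creates the pod after the file changes:

  # Confirm the stale flag in the generated manifest
  out/minikube-linux-amd64 ssh -p no-preload-499486 -- \
    sudo grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml
  # Hypothetical workaround: drop the removed flag and let the kubelet restart etcd
  out/minikube-linux-amd64 ssh -p no-preload-499486 -- \
    sudo sed -i '/proxy-refresh-interval/d' /etc/kubernetes/manifests/etcd.yaml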
	
	
	
	==> kernel <==
	 10:18:48 up 1 day, 19:00,  0 users,  load average: 0.08, 0.22, 0.91
	Linux no-preload-499486 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [6ea0a675973d] <==
	W0804 10:16:38.135801       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:38.135759       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 10:16:38.136738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 10:16:38.143395       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 10:16:38.150102       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 10:16:38.150123       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 10:16:38.150333       1 instance.go:232] Using reconciler: lease
	W0804 10:16:38.151092       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:38.151092       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:39.137033       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:39.137037       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:39.151630       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:40.629308       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:40.708474       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:40.920108       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:43.554034       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:43.561394       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:43.709826       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:48.012003       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:48.034347       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:48.378111       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:55.732902       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:55.811701       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:16:55.882681       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 10:16:58.151651       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
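
The apiserver is a casualty of the etcd crash above: every gRPC dial to 127.0.0.1:2379 is refused until the storage-factory bootstrap hits its deadline and the process fatals (the final F-line), which is why it too shows 12 exited attempts in the container table. Two quick confirmations from the harness (a sketch):

  # No etcd listener behind the refused dials
  out/minikube-linux-amd64 ssh -p no-preload-499486 -- sudo ss -ltn 'sport = :2379'
  # Last words of the most recent etcd attempt (container ID from the status table)
  out/minikube-linux-amd64 ssh -p no-preload-499486 -- docker logs --tail 5 170e383b72244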
	
	
	==> kube-controller-manager [80aef0e1e41b] <==
	I0804 10:16:44.086069       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:16:44.638963       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:16:44.638987       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:16:44.640337       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:16:44.640362       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:16:44.640730       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:16:44.640760       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:17:04.642443       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.94.2:8443/healthz\": dial tcp 192.168.94.2:8443: connect: connection refused"
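
The controller-manager starts its serving side cleanly but aborts 20 seconds later because the apiserver /healthz probe is refused. The same probe from the CI host reproduces it directly (a sketch; with the Linux docker driver the node IP is routable from the host):

  curl -k https://192.168.94.2:8443/healthz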
	
	
	==> kube-scheduler [2a1c20b2ffee] <==
	E0804 10:02:28.175749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:02:31.304161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:02:32.791509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:02:34.007548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:02:40.294146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:02:43.128115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.94.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:02:45.421355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:02:50.083757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:02:51.361497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:05.497126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:03:08.537516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:03:11.097373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:03:11.729593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:03:12.801646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:03:17.035915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:03:18.849345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:03:23.883368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:03:24.360764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:03:24.447406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:03:25.585024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:03:26.613910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:03:28.018647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:03:28.621818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:34.452113       1 server.go:274] "handlers are not fully synchronized" err="context canceled"
	E0804 10:03:34.452246       1 run.go:72] "command failed" err="finished without leader elect"
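
This earlier scheduler instance rode out the watch failures for a while but exited once it could no longer renew its leader-election lease ("finished without leader elect"). Once the apiserver is reachable again, the lease it was holding can be inspected with (a sketch, using the kubectl context minikube creates for the profile):

  kubectl --context no-preload-499486 -n kube-system get lease kube-scheduler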
	
	
	==> kube-scheduler [f9db373fc015] <==
	E0804 10:17:35.685923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:17:38.961891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:17:40.036902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:17:42.826449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:17:43.275080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:17:47.352735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:17:56.194621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:18:01.000920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:18:03.342385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:18:07.787680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:18:10.101000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:18:10.448694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:18:13.466485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.94.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:18:14.020990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:18:15.390361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:18:19.040765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:18:20.371502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:18:21.540547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:18:25.585340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:18:27.273825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:18:31.771744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:18:35.082646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:18:36.337073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:18:37.460556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:18:41.940195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	
	
	==> kubelet <==
	Aug 04 10:18:30 no-preload-499486 kubelet[1550]: I0804 10:18:30.686086    1550 scope.go:117] "RemoveContainer" containerID="6ea0a675973d81dde80ae3a00c3d70b3770278bb2eb3abbd26498cec2d3752d3"
	Aug 04 10:18:30 no-preload-499486 kubelet[1550]: E0804 10:18:30.686285    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-no-preload-499486_kube-system(f4c9aec0fc04dec0ce14ce1fda478878)\"" pod="kube-system/kube-apiserver-no-preload-499486" podUID="f4c9aec0fc04dec0ce14ce1fda478878"
	Aug 04 10:18:31 no-preload-499486 kubelet[1550]: I0804 10:18:31.282078    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:18:31 no-preload-499486 kubelet[1550]: E0804 10:18:31.282507    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:18:31 no-preload-499486 kubelet[1550]: E0804 10:18:31.779886    1550 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.94.2:8443/api/v1/namespaces/default/events/no-preload-499486.1858883701518ab4\": dial tcp 192.168.94.2:8443: connect: connection refused" event="&Event{ObjectMeta:{no-preload-499486.1858883701518ab4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:no-preload-499486,UID:no-preload-499486,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node no-preload-499486 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:no-preload-499486,},FirstTimestamp:2025-08-04 10:03:44.687508148 +0000 UTC m=+0.105538214,LastTimestamp:2025-08-04 10:03:44.784778199 +0000 UTC m=+0.202808267,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:no-preload-499486,}"
	Aug 04 10:18:32 no-preload-499486 kubelet[1550]: E0804 10:18:32.205813    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:18:33 no-preload-499486 kubelet[1550]: E0804 10:18:33.685568    1550 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.94.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Aug 04 10:18:34 no-preload-499486 kubelet[1550]: E0804 10:18:34.759334    1550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"no-preload-499486\" not found"
	Aug 04 10:18:38 no-preload-499486 kubelet[1550]: I0804 10:18:38.283755    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:18:38 no-preload-499486 kubelet[1550]: E0804 10:18:38.284116    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:18:38 no-preload-499486 kubelet[1550]: E0804 10:18:38.685690    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:18:38 no-preload-499486 kubelet[1550]: I0804 10:18:38.685773    1550 scope.go:117] "RemoveContainer" containerID="80aef0e1e41b81cb0f8b058ed3f2dccceb3285abc8cabc20f2603666b99f4941"
	Aug 04 10:18:38 no-preload-499486 kubelet[1550]: E0804 10:18:38.685920    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-no-preload-499486_kube-system(a4b1d6b4ed5bdfde5a36a79a8a11f1a7)\"" pod="kube-system/kube-controller-manager-no-preload-499486" podUID="a4b1d6b4ed5bdfde5a36a79a8a11f1a7"
	Aug 04 10:18:39 no-preload-499486 kubelet[1550]: E0804 10:18:39.206316    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:18:40 no-preload-499486 kubelet[1550]: E0804 10:18:40.685703    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:18:40 no-preload-499486 kubelet[1550]: I0804 10:18:40.685781    1550 scope.go:117] "RemoveContainer" containerID="170e383b72244e90a4b5a27759222438dfdb8d4a28ad9820bdb56232fd5d66e7"
	Aug 04 10:18:40 no-preload-499486 kubelet[1550]: E0804 10:18:40.685925    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-no-preload-499486_kube-system(c3193c4a9a9a9175b95883d7fe1bad87)\"" pod="kube-system/etcd-no-preload-499486" podUID="c3193c4a9a9a9175b95883d7fe1bad87"
	Aug 04 10:18:41 no-preload-499486 kubelet[1550]: E0804 10:18:41.780958    1550 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.94.2:8443/api/v1/namespaces/default/events/no-preload-499486.1858883701518ab4\": dial tcp 192.168.94.2:8443: connect: connection refused" event="&Event{ObjectMeta:{no-preload-499486.1858883701518ab4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:no-preload-499486,UID:no-preload-499486,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node no-preload-499486 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:no-preload-499486,},FirstTimestamp:2025-08-04 10:03:44.687508148 +0000 UTC m=+0.105538214,LastTimestamp:2025-08-04 10:03:44.784778199 +0000 UTC m=+0.202808267,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:no-preload-499486,}"
	Aug 04 10:18:42 no-preload-499486 kubelet[1550]: E0804 10:18:42.688203    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:18:42 no-preload-499486 kubelet[1550]: I0804 10:18:42.688283    1550 scope.go:117] "RemoveContainer" containerID="6ea0a675973d81dde80ae3a00c3d70b3770278bb2eb3abbd26498cec2d3752d3"
	Aug 04 10:18:42 no-preload-499486 kubelet[1550]: E0804 10:18:42.688433    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-no-preload-499486_kube-system(f4c9aec0fc04dec0ce14ce1fda478878)\"" pod="kube-system/kube-apiserver-no-preload-499486" podUID="f4c9aec0fc04dec0ce14ce1fda478878"
	Aug 04 10:18:44 no-preload-499486 kubelet[1550]: E0804 10:18:44.759534    1550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"no-preload-499486\" not found"
	Aug 04 10:18:45 no-preload-499486 kubelet[1550]: I0804 10:18:45.285443    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:18:45 no-preload-499486 kubelet[1550]: E0804 10:18:45.285880    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:18:46 no-preload-499486 kubelet[1550]: E0804 10:18:46.207174    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
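
The kubelet keeps trying to resurrect the three dead static pods but is throttled by CrashLoopBackOff: the restart back-off starts at 10s, doubles on each failure, and caps at 5m, which is the "back-off 5m0s" in the messages above. The schedule, as a quick sketch:

  d=10; while [ "$d" -lt 300 ]; do echo "${d}s"; d=$((d * 2)); done; echo "300s (cap)"

It also cannot register the node or renew its lease, since both go through the same unreachable apiserver.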
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486: exit status 2 (268.240674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-499486" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
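Each refused poll below is the kubectl-level equivalent of (a sketch, using the kubectl context minikube creates for the profile):

  kubectl --context no-preload-499486 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
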
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:18:53.891369 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:18:55.497634 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:18:56.076079 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:19:00.560409 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/bridge-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:19:03.491611 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:19:05.086945 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:19:05.141343 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubenet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:19:15.254268 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 34 more times]
E0804 10:19:50.237426 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 26 more times]
E0804 10:20:16.956395 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:20:18.561082 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 12 more times]
E0804 10:20:30.664227 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 10 more times]
E0804 10:20:41.678020 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:20:43.934234 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 69 more times]
E0804 10:21:53.728546 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 3 more times]
E0804 10:21:58.363726 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
[previous line repeated 7 more times]
E0804 10:22:06.999594 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:22:08.154298 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": net/http: TLS handshake timeout
E0804 10:22:24.250611 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:22:24.543293 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.1:59530->192.168.94.2:8443: read: connection reset by peer
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
E0804 10:22:33.013092 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.94.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.94.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486: exit status 2 (278.376968ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-499486" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-499486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-499486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.84µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-499486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-499486
helpers_test.go:235: (dbg) docker inspect no-preload-499486:

-- stdout --
	[
	    {
	        "Id": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	        "Created": "2025-08-04T09:53:15.660442354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2149831,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-08-04T10:03:35.921334492Z",
	            "FinishedAt": "2025-08-04T10:03:34.718097407Z"
	        },
	        "Image": "sha256:da3843d6394f34289e593ae899877bec769ea93dbd69d427e43ba72c57cff8a2",
	        "ResolvConfPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/hosts",
	        "LogPath": "/var/lib/docker/containers/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a/cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a-json.log",
	        "Name": "/no-preload-499486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-499486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-499486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cdcf9a40640ce3f1bbc1c3314aa5b5881f6cf5673ed1dc58ac7a101e948b388a",
	                "LowerDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb-init/diff:/var/lib/docker/overlay2/14186d2bed6bdd9b20ff44dd2ed07ccdaf1758422566a466fa49e19085ed482d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189db452ae62e2ad0b8f60e32810a71c307c0ea432a613ec47c0b86215089fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-499486",
	                "Source": "/var/lib/docker/volumes/no-preload-499486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-499486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-499486",
	                "name.minikube.sigs.k8s.io": "no-preload-499486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fac6055cf947ab02c491cbb5dd64cbf3c0ae98a2e42975ad1d99b1bdbe7a9bbd",
	            "SandboxKey": "/var/run/docker/netns/fac6055cf947",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-499486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:00:36:b7:69:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b62d1a98319626f2ebd728777c7c3c44586a7c69bc74cc1eeb93ee4ca2df5d38",
	                    "EndpointID": "cd2f2866ae03228d2f1c745367746ee5866c33aa7baf64438d9f50fae785c9c7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-499486",
	                        "cdcf9a40640c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 2 (260.692333ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-499486 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-561540 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo docker system info                                                                                                                                                                                                              │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                        │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                  │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cri-dockerd --version                                                                                                                                                                                                           │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat containerd --no-pager                                                                                                                                                                                             │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                 │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo containerd config dump                                                                                                                                                                                                          │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ ssh     │ -p kubenet-561540 sudo systemctl cat crio --no-pager                                                                                                                                                                                                   │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                         │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ ssh     │ -p kubenet-561540 sudo crio config                                                                                                                                                                                                                     │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ delete  │ -p kubenet-561540                                                                                                                                                                                                                                      │ kubenet-561540    │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ stop    │ -p newest-cni-768931 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │ 04 Aug 25 10:04 UTC │
	│ start   │ -p newest-cni-768931 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0 │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:04 UTC │                     │
	│ image   │ newest-cni-768931 image list --format=json                                                                                                                                                                                                             │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ pause   │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ unpause │ -p newest-cni-768931 --alsologtostderr -v=1                                                                                                                                                                                                            │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:08 UTC │ 04 Aug 25 10:08 UTC │
	│ delete  │ -p newest-cni-768931                                                                                                                                                                                                                                   │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:09 UTC │ 04 Aug 25 10:09 UTC │
	│ delete  │ -p newest-cni-768931                                                                                                                                                                                                                                   │ newest-cni-768931 │ jenkins │ v1.36.0 │ 04 Aug 25 10:09 UTC │ 04 Aug 25 10:09 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 10:04:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 10:04:32.687485 2163332 out.go:345] Setting OutFile to fd 1 ...
	I0804 10:04:32.687601 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687610 2163332 out.go:358] Setting ErrFile to fd 2...
	I0804 10:04:32.687614 2163332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 10:04:32.687787 2163332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 10:04:32.688302 2163332 out.go:352] Setting JSON to false
	I0804 10:04:32.689384 2163332 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":153962,"bootTime":1754147911,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 10:04:32.689473 2163332 start.go:140] virtualization: kvm guest
	I0804 10:04:32.691276 2163332 out.go:177] * [newest-cni-768931] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 10:04:32.692852 2163332 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 10:04:32.692888 2163332 notify.go:220] Checking for updates...
	I0804 10:04:32.695015 2163332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 10:04:32.696142 2163332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:32.697215 2163332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 10:04:32.698321 2163332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 10:04:32.699270 2163332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 10:04:32.700616 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:32.701052 2163332 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 10:04:32.723805 2163332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 10:04:32.723883 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.778232 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.768372933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.778341 2163332 docker.go:318] overlay module found
	I0804 10:04:32.779801 2163332 out.go:177] * Using the docker driver based on existing profile
	I0804 10:04:32.780788 2163332 start.go:304] selected driver: docker
	I0804 10:04:32.780822 2163332 start.go:918] validating driver "docker" against &{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.780895 2163332 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 10:04:32.781839 2163332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 10:04:32.827839 2163332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 10:04:32.819484271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 10:04:32.828202 2163332 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0804 10:04:32.828229 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:32.828284 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:32.828323 2163332 start.go:348] cluster config:
	{Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:32.830455 2163332 out.go:177] * Starting "newest-cni-768931" primary control-plane node in "newest-cni-768931" cluster
	I0804 10:04:32.831301 2163332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 10:04:32.832264 2163332 out.go:177] * Pulling base image v0.0.47-1753871403-21198 ...
	I0804 10:04:32.833160 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:32.833198 2163332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 10:04:32.833213 2163332 cache.go:56] Caching tarball of preloaded images
	I0804 10:04:32.833291 2163332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 10:04:32.833335 2163332 preload.go:172] Found /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 10:04:32.833346 2163332 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 10:04:32.833466 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:32.853043 2163332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon, skipping pull
	I0804 10:04:32.853066 2163332 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in daemon, skipping load
	I0804 10:04:32.853089 2163332 cache.go:230] Successfully downloaded all kic artifacts
	I0804 10:04:32.853130 2163332 start.go:360] acquireMachinesLock for newest-cni-768931: {Name:mk60747b86b31a8b440009760f939cd98b70b1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 10:04:32.853200 2163332 start.go:364] duration metric: took 46.728µs to acquireMachinesLock for "newest-cni-768931"
	I0804 10:04:32.853224 2163332 start.go:96] Skipping create...Using existing machine configuration
	I0804 10:04:32.853234 2163332 fix.go:54] fixHost starting: 
	I0804 10:04:32.853483 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:32.870192 2163332 fix.go:112] recreateIfNeeded on newest-cni-768931: state=Stopped err=<nil>
	W0804 10:04:32.870218 2163332 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 10:04:32.871722 2163332 out.go:177] * Restarting existing docker container for "newest-cni-768931" ...
	I0804 10:04:32.872698 2163332 cli_runner.go:164] Run: docker start newest-cni-768931
	I0804 10:04:33.099718 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:33.118449 2163332 kic.go:430] container "newest-cni-768931" state is running.
	I0804 10:04:33.118905 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:33.137343 2163332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/config.json ...
	I0804 10:04:33.137542 2163332 machine.go:93] provisionDockerMachine start ...
	I0804 10:04:33.137597 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:33.155160 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:33.155419 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:33.155437 2163332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 10:04:33.156072 2163332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58734->127.0.0.1:33169: read: connection reset by peer
	W0804 10:04:33.885027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:36.284896 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.284952 2163332 ubuntu.go:169] provisioning hostname "newest-cni-768931"
	I0804 10:04:36.285030 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.302808 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.303033 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.303047 2163332 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-768931 && echo "newest-cni-768931" | sudo tee /etc/hostname
	I0804 10:04:36.436070 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-768931
	
	I0804 10:04:36.436155 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.453360 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.453580 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.453597 2163332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-768931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-768931/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-768931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 10:04:36.577177 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
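The shell fragment above is minikube's idempotent hostname fix-up for /etc/hosts: leave the file alone if the name is already mapped, rewrite an existing 127.0.1.1 line, otherwise append one. A minimal Go sketch of the same logic, assuming a local file; ensureHostname is an illustrative helper name, not minikube API.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostname mirrors the shell above: no-op if the name is already
    // mapped, rewrite an existing 127.0.1.1 line, else append a new entry.
    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
            return nil // hostname already present, leave /etc/hosts alone
        }
        entry := "127.0.1.1 " + name
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out []byte
        if loopback.Match(data) {
            out = loopback.ReplaceAll(data, []byte(entry))
        } else {
            out = append(data, []byte(entry+"\n")...)
        }
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        if err := ensureHostname("/etc/hosts", "newest-cni-768931"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }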
	I0804 10:04:36.577204 2163332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21223-1578987/.minikube CaCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21223-1578987/.minikube}
	I0804 10:04:36.577269 2163332 ubuntu.go:177] setting up certificates
	I0804 10:04:36.577284 2163332 provision.go:84] configureAuth start
	I0804 10:04:36.577338 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:36.594945 2163332 provision.go:143] copyHostCerts
	I0804 10:04:36.595024 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem, removing ...
	I0804 10:04:36.595052 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem
	I0804 10:04:36.595122 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.pem (1082 bytes)
	I0804 10:04:36.595229 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem, removing ...
	I0804 10:04:36.595240 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem
	I0804 10:04:36.595279 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/cert.pem (1123 bytes)
	I0804 10:04:36.595353 2163332 exec_runner.go:144] found /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem, removing ...
	I0804 10:04:36.595363 2163332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem
	I0804 10:04:36.595397 2163332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21223-1578987/.minikube/key.pem (1675 bytes)
	I0804 10:04:36.595465 2163332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-768931 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-768931]
	I0804 10:04:36.675231 2163332 provision.go:177] copyRemoteCerts
	I0804 10:04:36.675299 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 10:04:36.675408 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.693281 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:36.786243 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 10:04:36.808201 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 10:04:36.829564 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 10:04:36.851320 2163332 provision.go:87] duration metric: took 274.022098ms to configureAuth
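The configureAuth step above ("generating server cert ... san=[...]") issues a CA-signed server certificate whose SANs cover every address the remote Docker daemon will be reached at. A sketch of that shape with crypto/x509 follows; the on-the-spot CA, 2048-bit keys, and one-year validity are illustrative assumptions (minikube loads ca.pem/ca-key.pem from its certs directory instead).

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative CA generated on the spot; minikube reuses its own CA files.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-768931"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs reported in the provision.go log entry above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-768931"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }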
	I0804 10:04:36.851348 2163332 ubuntu.go:193] setting minikube options for container-runtime
	I0804 10:04:36.851551 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:36.851596 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:36.868506 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:36.868714 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:36.868725 2163332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 10:04:36.993642 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0804 10:04:36.993669 2163332 ubuntu.go:71] root file system type: overlay
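The "overlay" answer above comes from running df --output=fstype / over SSH; the same fact can be read from /proc/mounts without a shell. mountFSType below is a hypothetical helper for illustration, not minikube code.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // mountFSType scans /proc/mounts for the given mountpoint; the last
    // matching entry wins, which handles over-mounted paths.
    func mountFSType(mountpoint string) (string, error) {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            return "", err
        }
        defer f.Close()
        fstype := ""
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // each line: device mountpoint fstype options dump pass
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[1] == mountpoint {
                fstype = fields[2]
            }
        }
        if fstype == "" {
            return "", fmt.Errorf("%s not found in /proc/mounts", mountpoint)
        }
        return fstype, nil
    }

    func main() {
        fstype, err := mountFSType("/")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(fstype) // "overlay" inside a kic container, per the log
    }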
	I0804 10:04:36.993814 2163332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 10:04:36.993894 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.011512 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.011804 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.011909 2163332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 10:04:37.144143 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 10:04:37.144254 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.163872 2163332 main.go:141] libmachine: Using SSH client type: native
	I0804 10:04:37.164133 2163332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83c0c0] 0x83edc0 <nil>  [] 0s} 127.0.0.1 33169 <nil> <nil>}
	I0804 10:04:37.164159 2163332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 10:04:37.294409 2163332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 10:04:37.294438 2163332 machine.go:96] duration metric: took 4.156880869s to provisionDockerMachine
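The docker.service update above follows a write-if-changed pattern: the candidate unit is written to a .new path, diff -u decides whether anything changed, and only then is the file swapped in and the daemon reloaded, enabled, and restarted. A local Go sketch of the same pattern, with exec details assumed (minikube runs the equivalent remotely over SSH with sudo):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit installs the unit only when its content changed, so an
    // unchanged config never triggers a docker restart.
    func updateUnit(path string, want []byte) error {
        if have, err := os.ReadFile(path); err == nil && bytes.Equal(have, want) {
            return nil // same bytes: skip daemon-reload and restart entirely
        }
        if err := os.WriteFile(path+".new", want, 0644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"-f", "enable", "docker"},
            {"-f", "restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }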
	I0804 10:04:37.294451 2163332 start.go:293] postStartSetup for "newest-cni-768931" (driver="docker")
	I0804 10:04:37.294467 2163332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 10:04:37.294538 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 10:04:37.294594 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.312083 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.402431 2163332 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 10:04:37.405677 2163332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0804 10:04:37.405711 2163332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0804 10:04:37.405722 2163332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0804 10:04:37.405732 2163332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0804 10:04:37.405748 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/addons for local assets ...
	I0804 10:04:37.405809 2163332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21223-1578987/.minikube/files for local assets ...
	I0804 10:04:37.405901 2163332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem -> 15826902.pem in /etc/ssl/certs
	I0804 10:04:37.406013 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 10:04:37.414129 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:37.436137 2163332 start.go:296] duration metric: took 141.67054ms for postStartSetup
	I0804 10:04:37.436224 2163332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 10:04:37.436265 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.453687 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.541885 2163332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0804 10:04:37.546057 2163332 fix.go:56] duration metric: took 4.692814355s for fixHost
	I0804 10:04:37.546084 2163332 start.go:83] releasing machines lock for "newest-cni-768931", held for 4.692869693s
	I0804 10:04:37.546159 2163332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-768931
	I0804 10:04:37.563070 2163332 ssh_runner.go:195] Run: cat /version.json
	I0804 10:04:37.563126 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.563138 2163332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 10:04:37.563203 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:37.580936 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.581156 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:37.740866 2163332 ssh_runner.go:195] Run: systemctl --version
	I0804 10:04:37.745223 2163332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 10:04:37.749326 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0804 10:04:37.766095 2163332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0804 10:04:37.766176 2163332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 10:04:37.773788 2163332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
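The find/sed pipeline above patches any loopback CNI config in place: it adds the "name" field that newer CNI versions expect and pins cniVersion to 1.0.0. A structured sketch of the same patch using encoding/json; the concrete path in main is hypothetical, since the log only shows the *loopback.conf* glob.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func patchLoopback(path string) error {
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var conf map[string]any
        if err := json.Unmarshal(raw, &conf); err != nil {
            return err
        }
        if conf["type"] != "loopback" {
            return fmt.Errorf("%s is not a loopback config", path)
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback" // the field the sed insertion adds
        }
        conf["cniVersion"] = "1.0.0" // pin the version, as the second sed does
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        // Hypothetical concrete filename matching the glob in the log.
        if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }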
	I0804 10:04:37.773820 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:37.773849 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0804 10:04:37.773948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:37.788117 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.201785 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0804 10:04:38.211955 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 10:04:38.221176 2163332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 10:04:38.221223 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 10:04:38.230298 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.238908 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 10:04:38.247614 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 10:04:38.256328 2163332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 10:04:38.264446 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 10:04:38.273173 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 10:04:38.282132 2163332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 10:04:38.290867 2163332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 10:04:38.298323 2163332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 10:04:38.305902 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:38.392109 2163332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 10:04:38.481905 2163332 start.go:495] detecting cgroup driver to use...
	I0804 10:04:38.481959 2163332 detect.go:187] detected "cgroupfs" cgroup driver on host os
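The cgroup-driver question is settled here from the host OS, and settled again later in this run via docker info --format {{.CgroupDriver}} before the kubelet config is rendered. A sketch of the second, simpler probe:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask the Docker daemon directly which cgroup driver it runs with.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
        fmt.Println(driver)
    }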
	I0804 10:04:38.482006 2163332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 10:04:38.492886 2163332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0804 10:04:38.492964 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 10:04:38.507193 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 10:04:38.524383 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:38.965725 2163332 ssh_runner.go:195] Run: which cri-dockerd
	I0804 10:04:38.969614 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 10:04:38.977908 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0804 10:04:38.993935 2163332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 10:04:39.070708 2163332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 10:04:39.151070 2163332 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 10:04:39.151179 2163332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 10:04:39.167734 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0804 10:04:39.179347 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.254327 2163332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 10:04:39.556127 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 10:04:39.566948 2163332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 10:04:39.577711 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:39.587256 2163332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 10:04:39.666843 2163332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 10:04:39.760652 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.840823 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 10:04:39.853363 2163332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0804 10:04:39.863091 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:39.939093 2163332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 10:04:39.998099 2163332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 10:04:40.009070 2163332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 10:04:40.009141 2163332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 10:04:40.012496 2163332 start.go:563] Will wait 60s for crictl version
	I0804 10:04:40.012547 2163332 ssh_runner.go:195] Run: which crictl
	I0804 10:04:40.015480 2163332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 10:04:40.047607 2163332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.3
	RuntimeApiVersion:  v1
	I0804 10:04:40.047667 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.071117 2163332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 10:04:40.096346 2163332 out.go:235] * Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.3 ...
	I0804 10:04:40.096430 2163332 cli_runner.go:164] Run: docker network inspect newest-cni-768931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0804 10:04:40.113799 2163332 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0804 10:04:40.117316 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:40.128718 2163332 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0804 10:04:40.129838 2163332 kubeadm.go:875] updating cluster {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 10:04:40.130050 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.510582 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:40.900777 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.302831 2163332 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 10:04:41.303034 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:41.705389 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.114511 2163332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I0804 10:04:42.516831 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.537600 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.537629 2163332 docker.go:633] Images already preloaded, skipping extraction
	I0804 10:04:42.537693 2163332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 10:04:42.556805 2163332 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0-beta.0
	registry.k8s.io/kube-scheduler:v1.34.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
	registry.k8s.io/kube-proxy:v1.34.0-beta.0
	registry.k8s.io/etcd:3.6.1-1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.1
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 10:04:42.556830 2163332 cache_images.go:85] Images are preloaded, skipping loading
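Both docker images listings above are compared against the expected preload manifest; extraction and loading are skipped only when every image is already present. A sketch of that check, with the expected list abridged from the stdout block and imagesPreloaded as an illustrative name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether the runtime already has every
    // expected image, using the same --format template as the log.
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil // at least one image missing: load the preload
            }
        }
        return true, nil
    }

    func main() {
        expected := []string{ // abridged from the preload list above
            "registry.k8s.io/kube-apiserver:v1.34.0-beta.0",
            "registry.k8s.io/etcd:3.6.1-1",
            "registry.k8s.io/coredns/coredns:v1.12.1",
            "registry.k8s.io/pause:3.10",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        ok, err := imagesPreloaded(expected)
        fmt.Println(ok, err)
    }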
	I0804 10:04:42.556843 2163332 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0-beta.0 docker true true} ...
	I0804 10:04:42.556981 2163332 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-768931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 10:04:42.557048 2163332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 10:04:42.603960 2163332 cni.go:84] Creating CNI manager for ""
	I0804 10:04:42.603991 2163332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 10:04:42.604000 2163332 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0804 10:04:42.604024 2163332 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-768931 NodeName:newest-cni-768931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 10:04:42.604182 2163332 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-768931"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.34.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 10:04:42.604258 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0-beta.0
	I0804 10:04:42.612607 2163332 binaries.go:44] Found k8s binaries, skipping transfer
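The kubeadm.yaml dumped above is rendered from the "kubeadm options" struct logged just before it. A minimal text/template sketch of that rendering step, for the InitConfiguration document only; the params struct here is a stand-in assumption for minikube's real one.

    package main

    import (
        "os"
        "text/template"
    )

    // Template mirrors the InitConfiguration section of the generated kubeadm.yaml.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.NodeIP}}"
      taints: []
    `

    func main() {
        params := struct {
            AdvertiseAddress, CRISocket, NodeName, NodeIP string
            APIServerPort                                 int
        }{"192.168.76.2", "unix:///var/run/cri-dockerd.sock", "newest-cni-768931", "192.168.76.2", 8443}
        template.Must(template.New("init").Parse(initCfg)).Execute(os.Stdout, params)
    }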
	I0804 10:04:42.612659 2163332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 10:04:42.620777 2163332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0804 10:04:42.637111 2163332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0804 10:04:42.652929 2163332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0804 10:04:42.669016 2163332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0804 10:04:42.672189 2163332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 10:04:42.681993 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:42.752820 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:42.766032 2163332 certs.go:68] Setting up /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931 for IP: 192.168.76.2
	I0804 10:04:42.766057 2163332 certs.go:194] generating shared ca certs ...
	I0804 10:04:42.766079 2163332 certs.go:226] acquiring lock for ca certs: {Name:mk3514dc1566d1f516f7ba0017c185c9e1cf2eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:42.766266 2163332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key
	I0804 10:04:42.766336 2163332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key
	I0804 10:04:42.766352 2163332 certs.go:256] generating profile certs ...
	I0804 10:04:42.766461 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/client.key
	I0804 10:04:42.766532 2163332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key.a5c16e02
	I0804 10:04:42.766586 2163332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key
	I0804 10:04:42.766711 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem (1338 bytes)
	W0804 10:04:42.766752 2163332 certs.go:480] ignoring /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690_empty.pem, impossibly tiny 0 bytes
	I0804 10:04:42.766766 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 10:04:42.766803 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/ca.pem (1082 bytes)
	I0804 10:04:42.766837 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/cert.pem (1123 bytes)
	I0804 10:04:42.766912 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/key.pem (1675 bytes)
	I0804 10:04:42.766983 2163332 certs.go:484] found cert: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem (1708 bytes)
	I0804 10:04:42.767635 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 10:04:42.790829 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 10:04:42.814436 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 10:04:42.873985 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 10:04:42.962257 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 10:04:42.987204 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 10:04:43.010504 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 10:04:43.032579 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/newest-cni-768931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 10:04:43.054052 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/ssl/certs/15826902.pem --> /usr/share/ca-certificates/15826902.pem (1708 bytes)
	I0804 10:04:43.074805 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 10:04:43.095457 2163332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21223-1578987/.minikube/certs/1582690.pem --> /usr/share/ca-certificates/1582690.pem (1338 bytes)
	I0804 10:04:43.116289 2163332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 10:04:43.132026 2163332 ssh_runner.go:195] Run: openssl version
	I0804 10:04:43.137020 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15826902.pem && ln -fs /usr/share/ca-certificates/15826902.pem /etc/ssl/certs/15826902.pem"
	I0804 10:04:43.145170 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148316 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 08:46 /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.148363 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15826902.pem
	I0804 10:04:43.154461 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15826902.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 10:04:43.162454 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 10:04:43.170868 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174158 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 08:36 /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.174205 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 10:04:43.180335 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 10:04:43.188046 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582690.pem && ln -fs /usr/share/ca-certificates/1582690.pem /etc/ssl/certs/1582690.pem"
	I0804 10:04:43.196142 2163332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199374 2163332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 08:46 /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.199418 2163332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582690.pem
	I0804 10:04:43.205534 2163332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1582690.pem /etc/ssl/certs/51391683.0"
	I0804 10:04:43.213018 2163332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 10:04:43.215961 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 10:04:43.221714 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 10:04:43.227380 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 10:04:43.233506 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 10:04:43.239207 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 10:04:43.245036 2163332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
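The six openssl probes above all assert the same invariant: each certificate must remain valid for at least another 86400 seconds (24 hours), which is what -checkend 86400 tests. The Go equivalent with crypto/x509, checking one of the same files:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM cert at path stays valid for at
    // least another d, the same check openssl's -checkend performs.
    func validFor(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }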
	I0804 10:04:43.250834 2163332 kubeadm.go:392] StartCluster: {Name:newest-cni-768931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:newest-cni-768931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 10:04:43.250956 2163332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 10:04:43.269121 2163332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 10:04:43.277263 2163332 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 10:04:43.277283 2163332 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0804 10:04:43.277330 2163332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 10:04:43.285660 2163332 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 10:04:43.286263 2163332 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-768931" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.286552 2163332 kubeconfig.go:62] /home/jenkins/minikube-integration/21223-1578987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-768931" cluster setting kubeconfig missing "newest-cni-768931" context setting]
	I0804 10:04:43.286984 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
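The lock.go entries show WriteFile guarded by a named lock with Delay:500ms and Timeout:1m0s, the fields of a juju/mutex-style Spec printed in the braces. A stdlib-only sketch of that retry contract using an O_CREATE|O_EXCL lock file; treat it as an illustration of the acquire loop, not minikube's actual locking.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // writeFileLocked retries acquiring a sidecar lock file every delay
    // until timeout, then writes the target file and releases the lock.
    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                break
            }
            if !errors.Is(err, os.ErrExist) {
                return err
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s acquiring %s", timeout, lock)
            }
            time.Sleep(delay) // the Delay:500ms from the log
        }
        return os.WriteFile(path, data, 0600)
    }

    func main() {
        err := writeFileLocked("kubeconfig", []byte("apiVersion: v1\n"), 500*time.Millisecond, time.Minute)
        fmt.Println(err)
    }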
	I0804 10:04:43.288423 2163332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 10:04:43.298821 2163332 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0804 10:04:43.298859 2163332 kubeadm.go:593] duration metric: took 21.569333ms to restartPrimaryControlPlane
	I0804 10:04:43.298870 2163332 kubeadm.go:394] duration metric: took 48.062594ms to StartCluster
	I0804 10:04:43.298890 2163332 settings.go:142] acquiring lock: {Name:mk3d97f9903fe59355ed92bb92489c9b9834574a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.298958 2163332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 10:04:43.300110 2163332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/kubeconfig: {Name:mkf24ee14e943044a215c85a2f6a4d01263dd54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 10:04:43.300900 2163332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 10:04:43.300973 2163332 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 10:04:43.301073 2163332 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-768931"
	I0804 10:04:43.301106 2163332 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-768931"
	I0804 10:04:43.301136 2163332 config.go:182] Loaded profile config "newest-cni-768931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 10:04:43.301159 2163332 addons.go:69] Setting dashboard=true in profile "newest-cni-768931"
	I0804 10:04:43.301172 2163332 addons.go:238] Setting addon dashboard=true in "newest-cni-768931"
	W0804 10:04:43.301179 2163332 addons.go:247] addon dashboard should already be in state true
	I0804 10:04:43.301151 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301204 2163332 addons.go:69] Setting default-storageclass=true in profile "newest-cni-768931"
	I0804 10:04:43.301216 2163332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-768931"
	I0804 10:04:43.301196 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.301557 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.301866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.302384 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.303179 2163332 out.go:177] * Verifying Kubernetes components...
	I0804 10:04:43.305197 2163332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 10:04:43.324564 2163332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 10:04:43.325432 2163332 addons.go:238] Setting addon default-storageclass=true in "newest-cni-768931"
	I0804 10:04:43.325477 2163332 host.go:66] Checking if "newest-cni-768931" exists ...
	I0804 10:04:43.325866 2163332 cli_runner.go:164] Run: docker container inspect newest-cni-768931 --format={{.State.Status}}
	I0804 10:04:43.326227 2163332 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.326249 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 10:04:43.326263 2163332 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0804 10:04:43.326303 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.330702 2163332 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0804 10:04:43.886614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:04:43.332193 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0804 10:04:43.332226 2163332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0804 10:04:43.332289 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.352412 2163332 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.352439 2163332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 10:04:43.352511 2163332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-768931
	I0804 10:04:43.354098 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.357876 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.376872 2163332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/newest-cni-768931/id_rsa Username:docker}
	I0804 10:04:43.566637 2163332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 10:04:43.579924 2163332 api_server.go:52] waiting for apiserver process to appear ...
	I0804 10:04:43.580007 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:43.587036 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:43.661862 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:43.763049 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0804 10:04:43.763163 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0804 10:04:43.788243 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0804 10:04:43.788319 2163332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0804 10:04:43.865293 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.865365 2163332 retry.go:31] will retry after 305.419917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.872538 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0804 10:04:43.872570 2163332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0804 10:04:43.875393 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:43.875428 2163332 retry.go:31] will retry after 145.860796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
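These retry.go entries show the pattern minikube applies to addon manifests while the apiserver is still coming up: a failed kubectl apply is re-run after a short, jittered, growing delay rather than aborting the start. A minimal sketch of that shape (illustrative path and deadline; the real command runs over SSH inside the node with KUBECONFIG set, not locally):

    // Sketch only: retry a kubectl apply with jittered, growing backoff
    // until it succeeds or the deadline passes.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(kubectl, manifest string, deadline time.Duration) error {
    	backoff := 150 * time.Millisecond
    	start := time.Now()
    	for {
    		err := exec.Command(kubectl, "apply", "--force", "-f", manifest).Run()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("giving up on %s: %w", manifest, err)
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter
    		fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		backoff *= 2 // grow the base interval each attempt
    	}
    }

    func main() {
    	err := applyWithRetry("kubectl",
    		"/etc/kubernetes/addons/storage-provisioner.yaml", time.Minute)
    	fmt.Println(err)
    }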
	I0804 10:04:43.893731 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0804 10:04:43.893755 2163332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0804 10:04:43.974563 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0804 10:04:43.974597 2163332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0804 10:04:44.022021 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.068260 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0804 10:04:44.068309 2163332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0804 10:04:44.080910 2163332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 10:04:44.164887 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0804 10:04:44.164970 2163332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0804 10:04:44.171091 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:04:44.277704 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0804 10:04:44.277741 2163332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0804 10:04:44.368026 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368071 2163332 retry.go:31] will retry after 204.750775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:44.368122 2163332 api_server.go:72] duration metric: took 1.067187806s to wait for apiserver process to appear ...
	I0804 10:04:44.368138 2163332 api_server.go:88] waiting for apiserver healthz status ...
	I0804 10:04:44.368158 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:44.368545 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
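The api_server.go lines above poll the apiserver's /healthz endpoint, treating dial errors as "not up yet". A minimal sketch of such a poll loop; certificate verification is skipped here for brevity, whereas the real client would trust the cluster CA:

    // Sketch only: poll /healthz until the apiserver answers 200 OK
    // or an overall timeout elapses.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute))
    }

Until the kube-apiserver container is actually serving, each probe fails fast with "connection refused", which is what the repeated "stopped:" lines in this log record.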
	I0804 10:04:44.383288 2163332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.383317 2163332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0804 10:04:44.480138 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:04:44.573381 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:04:44.869120 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:04:45.817807 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (21.02485888s)
	W0804 10:04:45.817865 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817882 2149628 retry.go:31] will retry after 7.331884675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47830->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817886 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.577242103s)
	W0804 10:04:45.817921 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.817941 2149628 retry.go:31] will retry after 8.626487085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47842->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819147 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.673641591s)
	W0804 10:04:45.819203 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:45.819221 2149628 retry.go:31] will retry after 10.775617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:47846->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:46.383837 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:04:48.883614 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:49.869344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:49.869418 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:51.383255 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:53.150556 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:04:53.202901 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:53.202938 2149628 retry.go:31] will retry after 10.556999875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:53.383788 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:54.445142 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:04:54.496071 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.496106 2149628 retry.go:31] will retry after 19.784775984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:54.871144 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:54.871202 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:55.384040 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:56.595610 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:04:56.648210 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:04:56.648246 2149628 retry.go:31] will retry after 19.28607151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:04:57.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:04:59.871849 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:04:59.871895 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:04:59.883484 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:02.383555 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:03.761004 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:03.814105 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:03.814138 2149628 retry.go:31] will retry after 18.372442886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:04.883286 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:04.478042 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (20.306910761s)
	W0804 10:05:04.478091 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.478126 2163332 retry.go:31] will retry after 410.995492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672813 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (20.192633915s)
	W0804 10:05:04.672867 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.672888 2163332 retry.go:31] will retry after 182.584114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703068 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.129638597s)
	W0804 10:05:04.703115 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.703134 2163332 retry.go:31] will retry after 523.614331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": net/http: TLS handshake timeout; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:04.856484 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:04.872959 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:04.873004 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:04.889864 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:05.192954 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:37594->192.168.76.2:8443: read: connection reset by peer
	I0804 10:05:05.227229 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:05.369063 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.369560 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:05.868214 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:05.868705 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:06.201020 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.344463633s)
	W0804 10:05:06.201082 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201113 2163332 retry.go:31] will retry after 482.284125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201118 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.311218695s)
	W0804 10:05:06.201165 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.201186 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201211 2163332 retry.go:31] will retry after 887.479058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.201194 2163332 retry.go:31] will retry after 435.691438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
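
[editor's note] The retry.go:31 lines record minikube's retry loop: each failed apply is re-run after a randomized, roughly exponentially growing delay (482ms, 435ms, 887ms, ... climbing past 3s later in the log). A minimal sketch of that pattern follows; retryExpo is a hypothetical name, and the real helper in the minikube codebase differs in detail.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs fn with jittered exponential backoff until it
// succeeds or maxElapsed is exceeded, printing the same style of
// "will retry after ..." line seen in the log above.
func retryExpo(fn func() error, base, maxElapsed time.Duration) error {
	start := time.Now()
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("timed out after %s: %w", time.Since(start), err)
		}
		// Jitter so parallel appliers (dashboard, storageclass,
		// storage-provisioner) do not retry in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused")
		}
		return nil
	}, 500*time.Millisecond, 10*time.Second)
	fmt.Println(err) // <nil> once the third attempt succeeds
}
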
	I0804 10:05:06.368292 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.368825 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
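
[editor's note] Interleaved with the applies, api_server.go probes the apiserver's /healthz endpoint about twice a second and immediately records "stopped" on connection refused. A sketch of a single probe, under the assumptions that a 2s timeout is acceptable and that certificate verification is skipped because this sketch does not load the cluster CA (minikube's actual probe handles TLS differently):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint,
// mirroring the "Checking apiserver healthz at ..." lines above.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the
			// cluster CA that signed the apiserver's serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err // e.g. "connect: connection refused" while the apiserver is down
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.76.2:8443"); err != nil {
		fmt.Println("stopped:", err)
	}
}
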
	I0804 10:05:06.637302 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:06.683768 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:06.697149 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.697200 2163332 retry.go:31] will retry after 912.303037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:06.737524 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.737566 2163332 retry.go:31] will retry after 625.926598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:06.868554 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:06.869018 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.089442 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.144156 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.144195 2163332 retry.go:31] will retry after 785.129731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.364509 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:07.368843 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.369217 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:07.420384 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.420426 2163332 retry.go:31] will retry after 1.204230636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.610548 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:07.663536 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.663566 2163332 retry.go:31] will retry after 847.493782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:07.384053 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:07.868944 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:07.869396 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:07.929533 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:07.992350 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:07.992381 2163332 retry.go:31] will retry after 1.598370768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.368829 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.369322 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:08.511490 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:08.563819 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.563859 2163332 retry.go:31] will retry after 2.394822068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.625020 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:08.680531 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.680572 2163332 retry.go:31] will retry after 1.418436203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:08.868633 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:08.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.368624 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.369142 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:09.591529 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:09.645331 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.645367 2163332 retry.go:31] will retry after 3.361261664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:09.868611 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:09.869088 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.099510 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:10.154439 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.154474 2163332 retry.go:31] will retry after 1.332951383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:10.368786 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.369300 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.869015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:10.869515 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:10.959750 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:11.011704 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.011736 2163332 retry.go:31] will retry after 3.283196074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.369218 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.369738 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:11.487993 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:11.543582 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.543631 2163332 retry.go:31] will retry after 1.836854478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:11.869009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:11.869527 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:12.369134 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.369608 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.284114 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 10:05:12.868285 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:12.868757 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.007033 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:13.060825 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.060859 2163332 retry.go:31] will retry after 5.419314165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.368273 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.368846 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:13.381071 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:13.436653 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.436740 2163332 retry.go:31] will retry after 4.903205255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:13.869165 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:13.869693 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.295170 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:14.348620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.348654 2163332 retry.go:31] will retry after 3.265872015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:14.368685 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.369071 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:14.868586 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:14.869001 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.368516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.368980 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:15.868561 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:15.869023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.368523 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.368989 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:16.868494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:16.868945 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.368464 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.368952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:17.615361 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:17.669075 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:17.669112 2163332 retry.go:31] will retry after 4.169004534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:15.935132 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:17.885492 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:17.868530 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:17.869032 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:18.340601 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:18.368999 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.369438 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:18.395142 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.395177 2163332 retry.go:31] will retry after 4.503631797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.480301 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:18.532269 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.532303 2163332 retry.go:31] will retry after 6.221358918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:18.868632 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:18.869050 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.368539 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.369007 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:19.868600 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:19.869064 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.368560 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.369023 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:20.868636 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:20.869103 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.368674 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.369151 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:21.838756 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:21.869088 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:21.869590 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	W0804 10:05:21.892280 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:21.892309 2163332 retry.go:31] will retry after 7.287119503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.368833 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.369350 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.187953 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:22.869045 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:22.869518 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:22.899745 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:22.973354 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:22.973440 2163332 retry.go:31] will retry after 5.491383729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:23.368948 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:24.754708 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:27.887543 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:05:29.439408 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.15524051s)
	W0804 10:05:29.439455 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.439566 2149628 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45456->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:29.441507 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.506331682s)
	W0804 10:05:29.441560 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441583 2149628 retry.go:31] will retry after 14.271169565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45488->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:29.441585 2149628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.253590877s)
	W0804 10:05:29.441617 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:29.441700 2149628 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:05:30.383305 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:28.370244 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:28.370296 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:28.465977 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:05:29.179675 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0804 10:05:32.383952 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:34.883276 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:33.371314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:33.371380 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:36.883454 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:38.883897 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:38.372462 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:38.372528 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:05:41.383199 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
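The node_ready.go warnings show the Ready-condition poll: minikube repeatedly GETs /api/v1/nodes/no-preload-499486 and treats connection-refused as retryable. A sketch of that loop with client-go, assuming a generic clientset (names here are illustrative, not minikube's):

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady polls the node until it reports Ready or the context expires.
// Connection-refused errors are expected while the apiserver restarts, so
// they are retried rather than treated as fatal, matching the warnings above.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}
```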
	I0804 10:05:43.713667 2149628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:43.766398 2149628 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:43.766528 2149628 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:05:43.769126 2149628 out.go:177] * Enabled addons: 
	I0804 10:05:43.770026 2149628 addons.go:514] duration metric: took 1m58.647363457s for enable addons: enabled=[]
	W0804 10:05:43.883892 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:43.373289 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 10:05:43.373454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:44.936710 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (20.181960154s)
	W0804 10:05:44.936754 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.936774 2163332 retry.go:31] will retry after 12.603121969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52098->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939850 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.473803888s)
	I0804 10:05:44.939875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.760161568s)
	W0804 10:05:44.939908 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:44.939909 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939927 2163332 ssh_runner.go:235] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.566452819s)
	I0804 10:05:44.939927 2163332 retry.go:31] will retry after 11.974707637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52114->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939942 2163332 retry.go:31] will retry after 10.364414585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52104->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:44.939952 2163332 logs.go:282] 2 containers: [649f5e5c295c 059756d38779]
	I0804 10:05:44.940008 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:44.959696 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:44.959763 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:44.981336 2163332 logs.go:282] 0 containers: []
	W0804 10:05:44.981364 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:44.981422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:45.001103 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:45.001170 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:45.019261 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.019295 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:45.019341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:45.037700 2163332 logs.go:282] 2 containers: [69f71bfef17b e3a6308944b3]
	I0804 10:05:45.037776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:45.055759 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.055792 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:45.055847 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:45.073894 2163332 logs.go:282] 0 containers: []
	W0804 10:05:45.073922 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
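Each control-plane component above is located by filtering docker ps -a on the kubelet's k8s_&lt;component&gt; container-name prefix; an empty ID list yields the "No container was found matching" warnings. The equivalent lookup as a small helper, an illustrative wrapper around the exact command in the log:

```go
package sketch

import (
	"os/exec"
	"strings"
)

// containerIDs mirrors the discovery commands above: list all containers,
// running or exited, whose name matches the kubelet's k8s_<component>
// naming prefix, and return the IDs printed by docker's --format output.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}
```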
	I0804 10:05:45.073935 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:45.073949 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:45.129417 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:45.122097    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.122637    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124224    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.124675    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:45.126118    3079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:45.129437 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:45.129450 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:45.156907 2163332 logs.go:123] Gathering logs for kube-apiserver [059756d38779] ...
	I0804 10:05:45.156940 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059756d38779"
	W0804 10:05:45.175729 2163332 logs.go:130] failed kube-apiserver [059756d38779]: command: /bin/bash -c "docker logs --tail 400 059756d38779" /bin/bash -c "docker logs --tail 400 059756d38779": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 059756d38779
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 059756d38779
	
	** /stderr **
	I0804 10:05:45.175748 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:45.175765 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:45.195944 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:45.195970 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:45.215671 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:45.215703 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:45.256918 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:45.256951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:45.283079 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:45.283122 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:45.318677 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:45.318712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:45.370577 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:45.370621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:45.391591 2163332 logs.go:123] Gathering logs for kube-controller-manager [e3a6308944b3] ...
	I0804 10:05:45.391616 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a6308944b3"
	I0804 10:05:45.412276 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:45.412300 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
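Each diagnostic pass runs the same fixed battery: journalctl for the kubelet and Docker units, docker logs --tail 400 per discovered container, dmesg, and a crictl/docker ps fallback for container status, each best-effort so one failing source does not abort the rest. A condensed sketch of that loop (function and parameter names are made up):

```go
package sketch

import (
	"fmt"
	"os/exec"
)

// gatherLogs approximates the diagnostic pass above: a fixed battery of
// shell commands, each run best-effort so one failing source (for example
// a container ID that no longer exists) does not abort the rest.
func gatherLogs(containers map[string]string) {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, id := range containers { // e.g. "kube-apiserver" -> "649f5e5c295c"
		cmds = append(cmds, fmt.Sprintf("docker logs --tail 400 %s # %s", id, name))
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %q: %v\n", c, err) // cf. the logs.go:130 warnings
			continue
		}
		fmt.Printf("== %s ==\n%s\n", c, out)
	}
}
```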
	W0804 10:05:46.384002 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:48.883850 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:47.962390 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:47.962840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
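Note the two distinct failure modes in the healthz probes: "context deadline exceeded" means a connection opened but no reply arrived in time, while "connect: connection refused" means nothing is listening on 8443 at all, i.e. the kube-apiserver container has exited. A bare-bones probe showing where each error surfaces (InsecureSkipVerify is a shortcut for the sketch only; a real check should trust the cluster CA):

```go
package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver's /healthz endpoint. A refused
// connection means nothing is listening on the port; a client timeout
// ("context deadline exceeded") means the port accepted the connection
// but never answered within the deadline.
func checkHealthz(addr string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err // both failure modes above surface here
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}
```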
	I0804 10:05:47.962936 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:47.981464 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:47.981534 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:47.999231 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:47.999296 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:48.017739 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.017764 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:48.017806 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:48.036069 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:48.036151 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:48.053625 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.053651 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:48.053706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:48.072069 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:48.072161 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:48.089963 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.089985 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:48.090033 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:48.107912 2163332 logs.go:282] 0 containers: []
	W0804 10:05:48.107934 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:48.107956 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:48.107972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:48.164032 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:48.156591    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.157104    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.158718    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.159117    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:48.160609    3276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:48.164052 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:48.164068 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:48.189481 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:48.189509 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:48.223302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:48.223340 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:48.243043 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:48.243072 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:48.279568 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:48.279605 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:48.305730 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:48.305759 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:48.326737 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:48.326763 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:48.376057 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:48.376092 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:48.397266 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:48.397297 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:50.949382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:50.949902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:50.950009 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:50.969779 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:50.969854 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:50.988509 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:50.988586 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:51.006536 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.006565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:51.006613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:51.024853 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:51.024921 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:51.042617 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.042645 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:51.042689 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:51.060511 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:51.060599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:51.079005 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.079031 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:51.079092 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:51.096451 2163332 logs.go:282] 0 containers: []
	W0804 10:05:51.096474 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:51.096489 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:51.096500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:51.152017 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:51.152057 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:51.202478 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:51.202527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:51.224042 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:51.224069 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:51.244633 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:51.244664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:51.263948 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:51.263981 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:51.300099 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:51.300130 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:51.327538 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:51.327568 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:51.383029 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:51.375959    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.376437    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.377941    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.378408    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:51.379910    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:51.383051 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:51.383067 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:51.408284 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:51.408314 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:05:51.384023 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:53.883929 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:05:53.941653 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:53.942148 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:53.942243 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:53.961471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:53.961551 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:53.979438 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:53.979526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:53.997538 2163332 logs.go:282] 0 containers: []
	W0804 10:05:53.997559 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:53.997604 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:54.016326 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:54.016411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:54.033583 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.033612 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:54.033663 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:54.051020 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:54.051103 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:54.068091 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.068118 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:54.068166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:54.085797 2163332 logs.go:282] 0 containers: []
	W0804 10:05:54.085822 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:54.085842 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:54.085855 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:54.111832 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:54.111861 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:54.137672 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:54.137701 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:54.158028 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:54.158058 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:54.212546 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:54.212579 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:54.231855 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:54.231886 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:54.282575 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:54.282614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:54.338570 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:54.331379    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.331842    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333378    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.333781    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:54.335263    3679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:54.338591 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:54.338604 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:54.373298 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:54.373329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:54.393825 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:54.393848 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
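The apply that follows batches all ten dashboard manifests into a single kubectl invocation, one -f flag per file, so the objects are validated (and here, fail validation) together. Rebuilding that command line from its parts, as a hypothetical helper:

```go
package sketch

import (
	"fmt"
	"strings"
)

// dashboardApplyCmd rebuilds the long command line below from its parts:
// a single kubectl invocation with one -f flag per addon manifest.
func dashboardApplyCmd(kubectl string, manifests []string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "sudo KUBECONFIG=/var/lib/minikube/kubeconfig %s apply --force", kubectl)
	for _, m := range manifests {
		fmt.Fprintf(&b, " -f %s", m)
	}
	return b.String()
}
```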
	I0804 10:05:55.304830 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0804 10:05:55.358381 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:55.358414 2163332 retry.go:31] will retry after 25.619477771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.915875 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:05:56.931223 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:56.931695 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:56.931788 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:05:56.971520 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971555 2163332 retry.go:31] will retry after 22.721182959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:56.971565 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:56.971637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:56.989778 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:56.989869 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:05:57.007294 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.007316 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:05:57.007359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:05:57.024882 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:05:57.024964 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:05:57.042858 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.042881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:05:57.042935 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:05:57.061232 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:05:57.061331 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:05:57.078841 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.078870 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:05:57.078919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:05:57.096724 2163332 logs.go:282] 0 containers: []
	W0804 10:05:57.096754 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:05:57.096778 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:05:57.096790 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:05:57.150588 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:05:57.150621 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:05:57.176804 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:05:57.176833 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:05:57.233732 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:05:57.225639    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.226657    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228215    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.228620    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:05:57.230079    3851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:05:57.233755 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:05:57.233768 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:05:57.270073 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:05:57.270109 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:05:57.290426 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:05:57.290461 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:05:57.327258 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:05:57.327286 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:05:57.353115 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:05:57.353143 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:05:57.373360 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:05:57.373392 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:05:57.423101 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:05:57.423133 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:05:57.540679 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:05:57.593367 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:05:57.593411 2163332 retry.go:31] will retry after 18.437511284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:05:55.884024 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:05:58.383443 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
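
	[editorial sketch] The interleaved 2149628 lines come from the parallel no-preload test, which polls the node's Ready condition roughly every 2.5 seconds and logs each refused connection before retrying. A sketch of that kind of readiness poll using client-go; the kubeconfig path, node name, and interval are copied from the log, the helper is illustrative rather than minikube's node_ready.go, and building it requires the k8s.io/client-go module:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady fetches the node and reports whether its Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(cs, "no-preload-499486")
			if err != nil {
				// Matches the pattern above: log the refused dial, then retry.
				fmt.Println("error getting node condition \"Ready\" status (will retry):", err)
			} else if ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2500 * time.Millisecond)
		}
	}
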
	I0804 10:05:59.945876 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:05:59.946354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:05:59.946446 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:05:59.966005 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:05:59.966091 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:05:59.985617 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:05:59.985701 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:00.004828 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.004855 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:00.004906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:00.023587 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:00.023651 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:00.041659 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.041680 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:00.041727 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:00.059493 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:00.059562 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:00.076712 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.076736 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:00.076779 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:00.095203 2163332 logs.go:282] 0 containers: []
	W0804 10:06:00.095222 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:00.095237 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:00.095248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:00.113747 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:00.113775 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:00.150407 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:00.150433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:00.202445 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:00.202486 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:00.229719 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:00.229755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:00.255849 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:00.255878 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:00.276091 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:00.276119 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:00.297957 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:00.297986 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:00.353933 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:00.346687    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.347273    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.348805    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.349306    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:00.350820    4096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:00.353953 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:00.353968 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:00.390814 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:00.390846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:00.883216 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:03.383100 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:05.383181 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:02.945900 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:02.946356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:02.946453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:02.965471 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:02.965535 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:02.983934 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:02.984001 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:03.002213 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.002237 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:03.002285 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:03.021772 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:03.021856 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:03.039529 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.039554 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:03.039612 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:03.057939 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:03.058004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:03.076289 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.076310 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:03.076355 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:03.094117 2163332 logs.go:282] 0 containers: []
	W0804 10:06:03.094146 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:03.094167 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:03.094182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:03.130756 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:03.130783 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:03.187120 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:03.179355    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.179917    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181530    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.181944    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:03.183460    4232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:03.187140 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:03.187153 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:03.207770 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:03.207804 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:03.244606 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:03.244642 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:03.295650 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:03.295686 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:03.351809 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:03.351844 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:03.379889 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:03.379922 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:03.406739 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:03.406767 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:03.427941 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:03.427967 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:05.948009 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:05.948483 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:05.948575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:05.967373 2163332 logs.go:282] 1 containers: [649f5e5c295c]
	I0804 10:06:05.967442 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:05.985899 2163332 logs.go:282] 1 containers: [bf239ceabd31]
	I0804 10:06:05.985979 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:06.004170 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.004194 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:06.004250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:06.022314 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:06.022386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:06.039940 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.039963 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:06.040005 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:06.058068 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:06.058144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:06.076569 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.076591 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:06.076631 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:06.094127 2163332 logs.go:282] 0 containers: []
	W0804 10:06:06.094153 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:06.094179 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:06.094193 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	I0804 10:06:06.119164 2163332 logs.go:123] Gathering logs for etcd [bf239ceabd31] ...
	I0804 10:06:06.119195 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf239ceabd31"
	I0804 10:06:06.140482 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:06.140517 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:06.190516 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:06.190551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:06.212353 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:06.212385 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:06.248893 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:06.248919 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:06.302627 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:06.302664 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:06.329602 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:06.329633 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:06.385087 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:06.377651    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.378359    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.379718    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.380186    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:06.381710    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:06.385113 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:06.385131 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:06.421810 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:06.421843 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:06:07.384103 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:09.883971 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:08.941210 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W0804 10:06:11.884134 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:14.383873 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:13.941780 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
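
	[editorial sketch] Unlike the instant "connection refused" failures earlier, this healthz probe only fails after the client-side timeout fires ("Client.Timeout exceeded while awaiting headers"): the TCP connect succeeded, but the apiserver never answered. A minimal sketch of such a probe with an illustrative 5s timeout; minikube's real timeout and TLS setup differ, and skipping certificate verification here is a simplification for the self-signed apiserver cert:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			// A stalled handshake or response surfaces as
			// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// A closed port fails immediately instead: "connect: connection refused".
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.76.2:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
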
	I0804 10:06:13.941906 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:13.960880 2163332 logs.go:282] 2 containers: [806e7ebaaed1 649f5e5c295c]
	I0804 10:06:13.960962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:13.979358 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:13.979441 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:13.996946 2163332 logs.go:282] 0 containers: []
	W0804 10:06:13.996972 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:13.997025 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:14.015595 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:14.015668 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:14.034223 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.034246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:14.034288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:14.052124 2163332 logs.go:282] 1 containers: [69f71bfef17b]
	I0804 10:06:14.052200 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:14.069965 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.069989 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:14.070032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:14.088436 2163332 logs.go:282] 0 containers: []
	W0804 10:06:14.088459 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:14.088473 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:14.088503 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:14.146648 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:14.146701 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:14.173008 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:14.173051 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 10:06:16.031588 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0804 10:06:16.384007 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:19.693397 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0804 10:06:20.978525 2163332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0804 10:06:28.857368 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.684287631s)
	W0804 10:06:28.857442 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:24.221601    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:06:28.850442    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49502->[::1]:8443: read: connection reset by peer"
	E0804 10:06:28.851023    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.852675    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:28.853078    4720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:28.857455 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:28.857466 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:28.857477 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.825848081s)
	W0804 10:06:28.857515 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	I0804 10:06:28.857580 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.164140796s)
	W0804 10:06:28.857620 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857662 2163332 out.go:270] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:49512->[::1]:8443: read: connection reset by peer; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W0804 10:06:28.857709 2163332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0804 10:06:28.857875 2163332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.879306724s)
	W0804 10:06:28.857914 2163332 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0804 10:06:28.857989 2163332 out.go:270] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
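Every manifest in the batch above fails identically: kubectl tries to download the OpenAPI schema for validation, and nothing answers on localhost:8443, so each apply dies before validation even starts. A minimal sketch (not minikube's own logic) of gating the apply on apiserver reachability rather than letting each file fail in turn:

    # hedged sketch: only apply the addon manifests once /healthz answers,
    # instead of letting every file fail the OpenAPI download as above
    if curl -ksf --max-time 5 https://localhost:8443/healthz >/dev/null; then
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/dashboard-ns.yaml
    else
      echo "apiserver not reachable on :8443; skipping addon apply" >&2
    fi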
	I0804 10:06:28.860496 2163332 out.go:177] * Enabled addons: 
	I0804 10:06:28.861918 2163332 addons.go:514] duration metric: took 1m45.560958591s for enable addons: enabled=[]
	W0804 10:06:28.885498 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:28.878501 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:28.878527 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:28.917388 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:28.917421 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:28.938499 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:28.938540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:28.979902 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:28.979935 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:29.005867 2163332 logs.go:123] Gathering logs for kube-apiserver [649f5e5c295c] ...
	I0804 10:06:29.005903 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5e5c295c"
	W0804 10:06:29.025877 2163332 logs.go:130] failed kube-apiserver [649f5e5c295c]: command: /bin/bash -c "docker logs --tail 400 649f5e5c295c" /bin/bash -c "docker logs --tail 400 649f5e5c295c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 649f5e5c295c
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 649f5e5c295c
	
	** /stderr **
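The gatherer also asks Docker for a kube-apiserver container, 649f5e5c295c, that the daemon has already removed, so the `docker logs` call exits 1 and that entry is skipped. A hedged guard for this race, purely illustrative:

    # hedged sketch: skip IDs the daemon no longer knows, avoiding the
    # "No such container" failure seen above
    id=649f5e5c295c
    if docker inspect "$id" >/dev/null 2>&1; then
      docker logs --tail 400 "$id"
    else
      echo "container $id is gone; skipping" >&2
    fi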
	I0804 10:06:29.025904 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:29.025916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:29.076718 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:29.076759 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:31.597358 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:31.597799 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:31.597939 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:31.617008 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:31.617067 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:31.635937 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:31.636004 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:31.654450 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.654474 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:31.654531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:31.673162 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:31.673288 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:31.690681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.690706 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:31.690759 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:31.712018 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:31.712111 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:31.729547 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.729576 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:31.729625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:31.747479 2163332 logs.go:282] 0 containers: []
	W0804 10:06:31.747501 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:31.747513 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:31.747525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:31.773882 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:31.773913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:31.828620 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:31.821229    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.821688    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823253    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.823731    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:31.825214    5036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
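The "describe nodes" probe fails the same way on every pass: the node's kubeconfig points kubectl at localhost:8443 and nothing is listening there. Two quick checks that would confirm this from inside the node (a sketch, assuming ss is available in the node image):

    # hedged sketch: show which server the kubeconfig targets, then whether
    # anything is actually listening on that port
    sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      config view --minify -o jsonpath='{.clusters[0].cluster.server}'
    sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"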
	I0804 10:06:31.828641 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:31.828655 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:31.854157 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:31.854190 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:31.873980 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:31.874004 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:31.910304 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:31.910342 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:31.931218 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:31.931246 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:31.969061 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:31.969091 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:32.019399 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:32.019436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:32.040462 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:32.040488 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:32.059511 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:32.059540 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
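From here the section settles into one fixed cycle, repeated roughly every three seconds: probe https://192.168.76.2:8443/healthz, get connection refused, re-enumerate the k8s_* containers component by component, and re-gather their logs. In outline (a sketch of the behaviour visible in the log, not minikube's code):

    # hedged sketch of the poll-and-diagnose cycle repeated below
    until curl -ksf --max-time 2 https://192.168.76.2:8443/healthz >/dev/null; do
      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
               kube-controller-manager kindnet kubernetes-dashboard; do
        docker ps -a --filter="name=k8s_$c" --format='{{.ID}}'
      done
      sleep 3
    done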
	I0804 10:06:34.622382 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:34.622843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:34.622941 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:34.642832 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:34.642895 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:34.660588 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:34.660660 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:34.678855 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.678878 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:34.678922 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:34.698191 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:34.698282 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:34.716571 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.716593 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:34.716636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:34.735252 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:34.735339 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:34.755152 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.755181 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:34.755230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:34.773441 2163332 logs.go:282] 0 containers: []
	W0804 10:06:34.773472 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:34.773488 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:34.773500 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:34.793528 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:34.793556 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:34.812435 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:34.812465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:34.837875 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:34.837905 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:34.858757 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:34.858786 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:34.878587 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:34.878614 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:34.916360 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:34.916391 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:34.982416 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:34.982452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:35.039762 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:35.031976    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.032521    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034096    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.034545    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:35.036090    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
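The recurring "container status" step is itself a fallback chain: the backticks substitute either crictl's path (when `which` finds it) or the bare name, and if that invocation fails for any reason, the command falls back to the docker CLI:

    # the two-level fallback behind "Gathering logs for container status"
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a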
	I0804 10:06:35.039782 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:35.039796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:35.066299 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:35.066330 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:35.104670 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:35.104700 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:37.656360 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:37.656872 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:37.656969 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:37.675825 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:37.675894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:37.694962 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:37.695023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W0804 10:06:38.886603 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:06:37.712658 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.712684 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:37.712735 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:37.730728 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:37.730800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:37.748576 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.748598 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:37.748640 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:37.767923 2163332 logs.go:282] 2 containers: [5321aae275b7 69f71bfef17b]
	I0804 10:06:37.768007 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:37.785275 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.785298 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:37.785347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:37.801999 2163332 logs.go:282] 0 containers: []
	W0804 10:06:37.802024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:37.802055 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:37.802067 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:37.839050 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:37.839076 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:37.907098 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:37.907134 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:37.962875 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:37.955444    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.955922    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957526    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.957895    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:37.959476    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:37.962896 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:37.962916 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:37.988976 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:37.989004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:38.011096 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:38.011124 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:38.049631 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:38.049661 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:38.102092 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:38.102126 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:38.124479 2163332 logs.go:123] Gathering logs for kube-controller-manager [69f71bfef17b] ...
	I0804 10:06:38.124506 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f71bfef17b"
	I0804 10:06:38.144973 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:38.145000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:38.170919 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:38.170951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:40.690387 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:40.690843 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:40.690940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:40.710160 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:40.710230 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:40.727856 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:40.727940 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:40.745578 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.745605 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:40.745648 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:40.763453 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:40.763516 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:40.781764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.781788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:40.781839 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:40.799938 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:40.800013 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:40.817161 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.817187 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:40.817260 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:40.835239 2163332 logs.go:282] 0 containers: []
	W0804 10:06:40.835260 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:40.835279 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:40.835293 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:40.855149 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:40.855177 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:40.922877 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:40.922913 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:40.978296 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:40.970913    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.971466    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973009    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.973412    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:40.974964    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:40.978318 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:40.978339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:41.004175 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:41.004205 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:41.025025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:41.025053 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:41.061373 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:41.061413 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:41.087250 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:41.087278 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:41.107920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:41.107947 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:41.148907 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:41.148937 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	W0804 10:06:41.383817 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:43.384045 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
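The interleaved no-preload-499486 lines (pid 2149628) show the same kind of endpoint failing in two stages: first "net/http: TLS handshake timeout" (TCP connects, but the apiserver is too slow to complete the handshake), then plain "connection refused" (nothing listening at all). One way to tell the two apart from a shell, as a sketch:

    # hedged sketch: a slow-but-alive apiserver completes (or stalls in) the
    # TLS handshake; a dead one refuses the TCP connection immediately
    timeout 5 openssl s_client -connect 192.168.94.2:8443 </dev/null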
	I0804 10:06:43.699853 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:43.700314 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:43.700416 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:43.719695 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:43.719771 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:43.738313 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:43.738403 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:43.756507 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.756531 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:43.756574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:43.775263 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:43.775363 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:43.793071 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.793109 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:43.793177 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:43.811134 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:43.811231 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:43.828955 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.828978 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:43.829038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:43.847773 2163332 logs.go:282] 0 containers: []
	W0804 10:06:43.847793 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:43.847819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:43.847831 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:43.873624 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:43.873653 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:43.894310 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:43.894337 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:43.945563 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:43.945599 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:43.966435 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:43.966465 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:43.984864 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:43.984889 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:44.024156 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:44.024192 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:44.060624 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:44.060652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:44.125956 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:44.125999 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:44.152471 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:44.152508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:44.207960 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:44.200436    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.200919    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202422    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.202839    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:44.204356    5903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:46.709332 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:46.709781 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:46.709868 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:46.729464 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:46.729567 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:46.748548 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:46.748644 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:46.766962 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.766986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:46.767041 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:46.786525 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:46.786603 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:46.804285 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.804311 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:46.804360 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:46.822116 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:46.822209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:46.839501 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.839530 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:46.839575 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:46.856689 2163332 logs.go:282] 0 containers: []
	W0804 10:06:46.856711 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:46.856728 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:46.856739 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:46.895336 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:46.895370 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:46.946627 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:46.946659 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:46.967302 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:46.967329 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:46.985945 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:46.985972 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:47.022376 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:47.022405 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:47.077558 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:47.069979    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.070438    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072002    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.072443    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:47.074016    6059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:47.077593 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:47.077609 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:47.097426 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:47.097453 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:47.160540 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:47.160577 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:47.186584 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:47.186612 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:06:45.883271 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:47.883345 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:49.883713 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:49.713880 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:49.714344 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:49.714431 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:49.732944 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:49.733002 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:49.751052 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:49.751129 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:49.769185 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.769207 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:49.769272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:49.787184 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:49.787250 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:49.804791 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.804809 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:49.804849 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:49.823604 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:49.823673 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:49.840745 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.840766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:49.840809 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:49.857681 2163332 logs.go:282] 0 containers: []
	W0804 10:06:49.857709 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:49.857729 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:49.857743 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:49.908402 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:49.908439 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:49.930280 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:49.930305 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:49.950867 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:49.950895 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:50.018519 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:50.018562 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:50.044619 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:50.044647 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:50.100753 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:06:50.092922    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.093459    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095094    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.095578    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:50.097081    6217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:06:50.100777 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:50.100793 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:50.125943 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:50.125970 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:50.146091 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:50.146117 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:50.181714 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:50.181742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	W0804 10:06:52.383197 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:54.383379 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:52.721516 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:52.721956 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:52.722053 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:52.741758 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:52.741819 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:52.760560 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:52.760637 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:52.778049 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.778071 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:52.778133 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:52.796442 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:52.796515 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:52.813403 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.813433 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:52.813486 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:52.831370 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:52.831443 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:52.850355 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.850377 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:52.850418 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:52.868304 2163332 logs.go:282] 0 containers: []
	W0804 10:06:52.868329 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:52.868348 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:52.868362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:52.909679 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:52.909712 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:52.959826 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:52.959860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:52.980766 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:52.980792 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:53.000093 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:53.000123 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:53.066024 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:53.066063 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:53.122172 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:53.114825    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.115397    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.116943    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.117412    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:53.118938    6410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:53.122200 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:53.122218 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:53.158613 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:53.158651 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:53.184392 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:53.184422 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:53.209845 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:53.209873 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:55.732938 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:55.733375 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:55.733476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:55.752276 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:55.752356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:55.770674 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:55.770750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:55.788757 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.788778 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:55.788823 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:55.806924 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:55.806986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:55.824084 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.824105 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:55.824163 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:55.842106 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:55.842195 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:55.859348 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.859376 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:55.859429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:55.876943 2163332 logs.go:282] 0 containers: []
	W0804 10:06:55.876972 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:55.876990 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:55.877001 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:55.903338 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:55.903372 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:55.924802 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:55.924829 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:55.980125 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:55.972792    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.973342    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.974941    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.975429    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:55.976926    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:55.980146 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:55.980161 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:06:56.000597 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:56.000622 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:56.037964 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:56.037996 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:56.088371 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:56.088407 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:56.107606 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:56.107634 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:56.143658 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:56.143689 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:56.211928 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:56.211963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:06:56.383880 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:06:58.883846 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:06:58.738791 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:06:58.739253 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:06:58.739345 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:06:58.758672 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:06:58.758750 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:06:58.778125 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:06:58.778188 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:06:58.795601 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.795623 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:06:58.795675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:06:58.814211 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:06:58.814275 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:06:58.831764 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.831790 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:06:58.831834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:06:58.849466 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:06:58.849539 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:06:58.867398 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.867427 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:06:58.867484 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:06:58.885191 2163332 logs.go:282] 0 containers: []
	W0804 10:06:58.885215 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:06:58.885234 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:06:58.885262 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:06:58.911583 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:06:58.911610 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:06:58.950860 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:06:58.950893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:06:59.004297 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:06:59.004333 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:06:59.025861 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:06:59.025889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:06:59.046944 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:06:59.046973 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:06:59.085764 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:06:59.085794 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:06:59.158468 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:06:59.158508 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:06:59.184434 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:06:59.184462 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:06:59.239706 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:06:59.232043    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.232545    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234123    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.234548    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:06:59.235973    6800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:06:59.239735 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:06:59.239748 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:01.760780 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:01.761288 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:01.761386 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:01.781655 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:01.781741 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:01.799466 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:01.799533 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:01.817102 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.817126 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:01.817181 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:01.834957 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:01.835044 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:01.852872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.852900 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:01.852951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:01.870948 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:01.871014 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:01.890001 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.890026 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:01.890072 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:01.907730 2163332 logs.go:282] 0 containers: []
	W0804 10:07:01.907750 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:01.907767 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:01.907777 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:01.980222 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:01.980260 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:02.006847 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:02.006888 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:02.047297 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:02.047329 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:02.101227 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:02.101276 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:02.124099 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:02.124129 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:02.161273 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:02.161308 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:02.187147 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:02.187182 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:02.242852 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:02.235381    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.235858    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237451    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.237924    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:02.239421    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:02.242879 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:02.242893 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:02.264021 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:02.264048 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:01.383265 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:03.883186 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:04.785494 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:04.785952 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:04.786043 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:04.805356 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:04.805452 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:04.823966 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:04.824039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:04.841949 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.841973 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:04.842019 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:04.859692 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:04.859761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:04.877317 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.877341 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:04.877383 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:04.895958 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:04.896035 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:04.913348 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.913378 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:04.913426 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:04.931401 2163332 logs.go:282] 0 containers: []
	W0804 10:07:04.931427 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:04.931448 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:04.931461 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:04.951477 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:04.951507 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:05.001983 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:05.002019 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:05.023585 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:05.023619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:05.044516 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:05.044549 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:05.113154 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:05.113195 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:05.170412 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:05.162898    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.163461    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165001    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.165501    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:05.167026    7139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:05.170434 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:05.170447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:05.210151 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:05.210186 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:05.248755 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:05.248781 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:05.275317 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:05.275352 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:05.883315 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:07.884030 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:10.383933 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:07.801587 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:07.802063 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:07.802166 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:07.821137 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:07.821214 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:07.839463 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:07.839532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:07.856871 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.856893 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:07.856938 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:07.875060 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:07.875136 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:07.896448 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.896477 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:07.896537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:07.914334 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:07.914402 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:07.931616 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.931638 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:07.931680 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:07.950247 2163332 logs.go:282] 0 containers: []
	W0804 10:07:07.950268 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:07.950285 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:07.950295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:07.974572 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:07.974603 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:07.994800 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:07.994827 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:08.013535 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:08.013565 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:08.048711 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:08.048738 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:08.075000 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:08.075029 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:08.095656 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:08.095681 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:08.135706 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:08.135742 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:08.189749 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:08.189780 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:08.264988 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:08.265028 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:08.321799 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:08.314236    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.314718    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316206    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.316648    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:08.318128    7360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:10.822388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:10.822855 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:10.822962 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:10.842220 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:10.842299 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:10.860390 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:10.860467 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:10.878544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.878567 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:10.878613 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:10.897953 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:10.898016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:10.916393 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.916419 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:10.916474 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:10.933957 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:10.934052 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:10.951873 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.951901 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:10.951957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:10.970046 2163332 logs.go:282] 0 containers: []
	W0804 10:07:10.970073 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:10.970101 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:10.970116 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:11.026141 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:11.018729    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.019305    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.020844    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.021228    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:11.022826    7464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:11.026162 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:11.026174 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:11.052155 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:11.052183 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:11.091637 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:11.091670 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:11.142651 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:11.142684 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:11.164003 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:11.164034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:11.200186 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:11.200214 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:11.270805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:11.270846 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:11.297260 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:11.297295 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:11.318423 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:11.318449 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:12.883177 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:15.383259 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:13.838395 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:13.838840 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:13.838937 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:13.858880 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:13.858955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:13.877417 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:13.877476 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:13.895850 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.895876 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:13.895919 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:13.914237 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:13.914304 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:13.932185 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.932214 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:13.932265 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:13.949806 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:13.949876 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:13.966753 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.966779 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:13.966837 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:13.984061 2163332 logs.go:282] 0 containers: []
	W0804 10:07:13.984080 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:13.984103 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:13.984118 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:14.024518 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:14.024551 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:14.075810 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:14.075839 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:14.096801 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:14.096835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:14.134271 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:14.134298 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:14.210356 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:14.210398 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:14.266888 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:14.259329    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.259828    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.261517    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.262045    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:14.263609    7690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:07:14.266911 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:14.266931 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:14.286729 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:14.286765 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:14.312819 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:14.312853 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:14.339716 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:14.339746 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:16.861870 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:16.862360 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:16.862459 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:16.882051 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:16.882134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:16.900321 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:16.900401 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:16.917983 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.918006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:16.918057 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:16.935570 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:16.935650 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:16.953434 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.953455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:16.953497 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:16.971207 2163332 logs.go:282] 1 containers: [5321aae275b7]
	I0804 10:07:16.971281 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:16.989882 2163332 logs.go:282] 0 containers: []
	W0804 10:07:16.989911 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:16.989957 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:17.006985 2163332 logs.go:282] 0 containers: []
	W0804 10:07:17.007007 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:17.007022 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:17.007034 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:17.081700 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:17.081741 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:17.107769 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:17.107798 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	I0804 10:07:17.129048 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:17.129074 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:17.170571 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:17.170601 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	I0804 10:07:17.190971 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:17.191000 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:17.227194 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:17.227225 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:17.283198 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:17.275311    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.275794    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277411    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.277858    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:17.279344    7877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:17.283220 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:17.283236 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	I0804 10:07:17.309760 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:17.309789 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:17.358841 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:17.358871 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:07:17.383386 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:19.383988 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:19.880139 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:19.880622 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:19.880709 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:19.901098 2163332 logs.go:282] 1 containers: [806e7ebaaed1]
	I0804 10:07:19.901189 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:19.921388 2163332 logs.go:282] 1 containers: [62ad65a28324]
	I0804 10:07:19.921455 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:19.941720 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.941751 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:19.941808 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:19.963719 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:19.963807 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:19.982285 2163332 logs.go:282] 0 containers: []
	W0804 10:07:19.982315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:19.982375 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:20.005165 2163332 logs.go:282] 2 containers: [db8e2ca87b17 5321aae275b7]
	I0804 10:07:20.005272 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:20.024272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.024296 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:20.024349 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:20.066617 2163332 logs.go:282] 0 containers: []
	W0804 10:07:20.066648 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:20.066662 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:20.066674 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:21.883344 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:23.883950 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:26.383273 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:28.383629 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:30.384083 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:32.883295 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:34.883588 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:37.383240 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:39.383490 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:41.805018 2163332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.738325489s)
	W0804 10:07:41.805054 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:30.119105    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:40.119975    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": net/http: TLS handshake timeout"
	E0804 10:07:41.799069    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59078->[::1]:8443: read: connection reset by peer"
	E0804 10:07:41.799640    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:41.801276    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:41.805062 2163332 logs.go:123] Gathering logs for etcd [62ad65a28324] ...
	I0804 10:07:41.805073 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62ad65a28324"
	W0804 10:07:41.824568 2163332 logs.go:130] failed etcd [62ad65a28324]: command: /bin/bash -c "docker logs --tail 400 62ad65a28324" /bin/bash -c "docker logs --tail 400 62ad65a28324": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 62ad65a28324
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 62ad65a28324
	
	** /stderr **
	I0804 10:07:41.824590 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:41.824606 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:41.866655 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:41.866687 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:41.918542 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:41.918580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:41.940196 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:41.940228 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:41.980124 2163332 logs.go:123] Gathering logs for kube-apiserver [806e7ebaaed1] ...
	I0804 10:07:41.980151 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 806e7ebaaed1"
	W0804 10:07:41.999188 2163332 logs.go:130] failed kube-apiserver [806e7ebaaed1]: command: /bin/bash -c "docker logs --tail 400 806e7ebaaed1" /bin/bash -c "docker logs --tail 400 806e7ebaaed1": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 806e7ebaaed1
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 806e7ebaaed1
	
	** /stderr **
	I0804 10:07:41.999208 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:41.999222 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:42.021383 2163332 logs.go:123] Gathering logs for kube-controller-manager [5321aae275b7] ...
	I0804 10:07:42.021413 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5321aae275b7"
	W0804 10:07:42.040097 2163332 logs.go:130] failed kube-controller-manager [5321aae275b7]: command: /bin/bash -c "docker logs --tail 400 5321aae275b7" /bin/bash -c "docker logs --tail 400 5321aae275b7": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 5321aae275b7
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 5321aae275b7
	
	** /stderr **
	I0804 10:07:42.040121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:42.040140 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:42.121467 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:42.121517 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:07:41.384132 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:43.883489 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:44.649035 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:44.649550 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:44.649655 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:44.668446 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:44.668531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:44.686095 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:44.686171 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:44.705643 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.705669 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:44.705736 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:44.724574 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:44.724643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:44.743534 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.743556 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:44.743599 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:44.762338 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:44.762422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:44.782440 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.782464 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:44.782511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:44.800457 2163332 logs.go:282] 0 containers: []
	W0804 10:07:44.800482 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:44.800503 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:44.800519 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:44.828987 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:44.829024 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:44.851349 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:44.851380 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:44.891887 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:44.891921 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:44.942771 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:44.942809 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:44.963910 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:44.963936 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:44.982991 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:44.983018 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:45.019697 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:45.019724 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:45.098143 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:45.098181 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:45.156899 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:45.149340    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.149889    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151529    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.151954    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:45.153458    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:45.156923 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:45.156936 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.685272 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:47.685730 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:47.685821 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W0804 10:07:45.884049 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:48.383460 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:50.384087 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:47.705698 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:47.705776 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:47.723486 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:47.723559 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:47.740254 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.740277 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:47.740328 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:47.758844 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:47.758912 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:47.776147 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.776169 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:47.776209 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:47.794049 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:47.794120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:47.810872 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.810892 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:47.810933 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:47.828618 2163332 logs.go:282] 0 containers: []
	W0804 10:07:47.828639 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:47.828655 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:47.828665 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:47.884561 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:47.876612    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.877177    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.878713    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.879149    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:47.880641    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:47.884591 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:47.884608 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:47.910602 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:47.910632 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:47.931635 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:47.931662 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:47.974664 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:47.974698 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:48.026673 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:48.026707 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:48.047596 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:48.047624 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:48.084322 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:48.084354 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:48.162716 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:48.162754 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:48.189072 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:48.189103 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.709307 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:50.709704 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:50.709797 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:50.728631 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:50.728711 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:50.747056 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:50.747128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:50.764837 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.764861 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:50.764907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:50.783351 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:50.783422 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:50.801048 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.801068 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:50.801112 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:50.819524 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:50.819605 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:50.837558 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.837583 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:50.837635 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:50.855272 2163332 logs.go:282] 0 containers: []
	W0804 10:07:50.855300 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:50.855315 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:50.855334 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:50.875612 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:50.875640 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:50.895850 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:50.895876 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:50.976003 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:50.976045 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:51.002688 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:51.002724 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:51.045612 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:51.045644 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:51.098299 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:51.098331 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:51.135309 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:51.135342 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:51.191580 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:51.183846    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.184481    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186082    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.186483    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:51.188015    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:51.191601 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:51.191615 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:51.218895 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:51.218923 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	W0804 10:07:52.883308 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:54.883712 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:53.739326 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:53.739815 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:53.739915 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:53.760078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:53.760152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:53.778771 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:53.778848 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:53.796996 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.797026 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:53.797075 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:53.815962 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:53.816032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:53.833919 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.833942 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:53.833991 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:53.852829 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:53.852894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:53.870544 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.870572 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:53.870620 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:53.888900 2163332 logs.go:282] 0 containers: []
	W0804 10:07:53.888923 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:53.888941 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:53.888954 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:53.909456 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:53.909482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:53.959416 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:53.959451 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:53.979376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:53.979406 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:54.015365 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:54.015393 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:07:54.092580 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:54.092627 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:54.119325 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:54.119436 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:54.178242 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:54.170338    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.171010    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172560    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.172976    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:54.174509    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:54.178266 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:54.178288 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:54.205571 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:54.205602 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:54.226781 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:54.226811 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.772513 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:56.773019 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:56.773137 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:56.792596 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:56.792666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:56.810823 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:56.810896 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:56.828450 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.828480 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:56.828532 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:56.847167 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:56.847237 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:56.866291 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.866315 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:56.866358 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:56.884828 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:56.884907 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:56.905059 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.905088 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:56.905134 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:56.923381 2163332 logs.go:282] 0 containers: []
	W0804 10:07:56.923417 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:56.923435 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:07:56.923447 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:07:56.943931 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:07:56.943957 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:07:56.986803 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:56.986835 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:57.013326 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:57.013360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:07:57.068200 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:07:57.060866    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.061398    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.062981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.063498    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:07:57.064981    9218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:07:57.068220 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:07:57.068232 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:07:57.093915 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:07:57.093943 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:07:57.144935 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:07:57.144969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:07:57.166788 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:07:57.166813 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:07:57.188225 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:07:57.188254 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:07:57.224405 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:07:57.224433 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 10:07:56.883778 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:07:59.383176 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:07:59.805597 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:07:59.806058 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:07:59.806152 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:07:59.824866 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:07:59.824944 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:07:59.843663 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:07:59.843753 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:07:59.861286 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.861306 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:07:59.861356 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:07:59.880494 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:07:59.880573 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:07:59.898827 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.898851 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:07:59.898894 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:07:59.917517 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:07:59.917584 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:07:59.935879 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.935906 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:07:59.935963 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:07:59.954233 2163332 logs.go:282] 0 containers: []
	W0804 10:07:59.954264 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:07:59.954284 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:07:59.954302 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:07:59.980238 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:07:59.980271 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:00.037175 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:00.029528    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.030067    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.031620    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.032023    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:00.033553    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
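This failure is consistent with the healthz probe above: the on-node kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no apiserver bound to that port, the client's five discovery attempts all fail with "connection refused" before kubectl gives up. The port state is easy to confirm independently (a throwaway check, not part of the test):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the same endpoint the on-node kubectl uses. On this node the
    	// call fails immediately with "connect: connection refused", matching
    	// the five memcache.go errors above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }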
	I0804 10:08:00.037200 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:00.037215 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:00.079854 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:00.079889 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:00.117813 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:00.117842 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:00.199625 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:00.199671 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:00.225938 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:00.225969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:00.246825 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:00.246857 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:00.300311 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:00.300362 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:00.322075 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:00.322105 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
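That completes one diagnostic sweep: with the apiserver unreachable, minikube falls back to journalctl for kubelet and Docker/cri-docker, a filtered dmesg, docker logs --tail 400 for each control-plane container it found, and a crictl-with-docker-fallback container listing, then re-probes healthz about three seconds later. The host-side commands, run through /bin/bash -c exactly as above, can be reproduced with a sketch like this (gatherCmds is an illustrative name; a Linux host with bash and sudo is assumed):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherCmds mirrors the sweep above. Running each entry through
    // /bin/bash -c is what makes the backtick substitution and the
    // crictl-or-docker fallback ("cmd1 || cmd2") work.
    var gatherCmds = []struct{ name, cmd string }{
    	{"kubelet", "sudo journalctl -u kubelet -n 400"},
    	{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
    	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
    	for _, g := range gatherCmds {
    		fmt.Println("== Gathering logs for", g.name, "==")
    		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Println("(non-zero exit:", err, ")")
    		}
    		fmt.Print(string(out))
    	}
    }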
	W0804 10:08:01.383269 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:02.842602 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:02.843031 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:02.843128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:02.862419 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:02.862503 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:02.881322 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:02.881409 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:02.902962 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.902986 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:02.903039 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:02.922238 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:02.922315 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:02.940312 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.940340 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:02.940391 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:02.960494 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:02.960580 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:02.978877 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.978915 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:02.978977 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:02.996894 2163332 logs.go:282] 0 containers: []
	W0804 10:08:02.996918 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:02.996937 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:02.996951 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:03.060369 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:03.060412 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:03.100294 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:03.100320 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:03.128232 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:03.128269 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:03.149215 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:03.149276 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:03.168809 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:03.168839 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:03.244969 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:03.245019 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:03.302519 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:03.294536    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.295054    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.296664    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.297129    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:03.298652    9598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:03.302541 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:03.302555 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:03.328592 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:03.328621 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:03.349409 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:03.349436 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:05.892519 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:05.892926 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:05.893018 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:05.912863 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:05.912930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:05.931765 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:05.931842 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:05.949624 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.949651 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:05.949706 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:05.969017 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:05.969096 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:05.987253 2163332 logs.go:282] 0 containers: []
	W0804 10:08:05.987279 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:05.987338 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:06.006096 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:06.006174 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:06.023866 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.023898 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:06.023955 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:06.041554 2163332 logs.go:282] 0 containers: []
	W0804 10:08:06.041574 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:06.041592 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:06.041603 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:06.078088 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:06.078114 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:06.160862 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:06.160907 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:06.187395 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:06.187425 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:06.243359 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:06.235931    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.236430    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.237921    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.238444    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:06.239969    9757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:06.243387 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:06.243404 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:06.269689 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:06.269719 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:06.290404 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:06.290435 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:06.310595 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:06.310619 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:06.330304 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:06.330331 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:06.372930 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:06.372969 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:08.923937 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:08.924354 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:08.924450 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:08.943688 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:08.943758 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:08.963008 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:08.963079 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:08.981372 2163332 logs.go:282] 0 containers: []
	W0804 10:08:08.981400 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:08.981453 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:08.999509 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:08.999592 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:09.017857 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.017881 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:09.017930 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:09.036581 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:09.036643 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:09.054584 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.054613 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:09.054666 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:09.072888 2163332 logs.go:282] 0 containers: []
	W0804 10:08:09.072924 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:09.072949 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:09.072965 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:09.149606 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:09.149645 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:09.178148 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:09.178185 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:09.222507 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:09.222544 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:09.275195 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:09.275235 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:09.299125 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:09.299159 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:09.319703 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:09.319747 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:09.346880 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:09.346922 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:09.404327 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:09.396630    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.397126    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.398704    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.399191    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:09.400813    9961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:09.404352 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:09.404367 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:09.425425 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:09.425452 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:11.963472 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:11.963939 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:11.964032 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:11.983012 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:11.983080 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:12.001567 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:12.001629 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:12.019335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.019361 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:12.019428 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:12.038818 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:12.038893 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:12.056951 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.056978 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:12.057022 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:12.075232 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:12.075305 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:12.092737 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.092758 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:12.092800 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:12.109994 2163332 logs.go:282] 0 containers: []
	W0804 10:08:12.110024 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:12.110044 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:12.110055 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:12.166801 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:12.158687   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.159257   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.160910   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.161382   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:12.162961   10091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:12.166825 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:12.166842 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:12.192505 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:12.192533 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:12.213260 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:12.213294 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:12.234230 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:12.234264 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:12.254032 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:12.254068 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:12.336496 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:12.336538 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:12.362829 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:12.362860 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:12.404783 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:12.404822 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:12.456932 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:12.456963 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:12.885483 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	I0804 10:08:14.998006 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:14.998459 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:14.998558 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:15.018639 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:15.018726 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:15.037594 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:15.037664 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:15.055647 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.055675 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:15.055720 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:15.073464 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:15.073538 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:15.091563 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.091588 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:15.091636 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:15.110381 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:15.110457 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:15.128744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.128766 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:15.128811 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:15.147315 2163332 logs.go:282] 0 containers: []
	W0804 10:08:15.147336 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:15.147350 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:15.147369 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:15.167872 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:15.167908 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:15.211657 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:15.211690 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:15.233001 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:15.233026 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:15.252541 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:15.252580 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:15.291017 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:15.291044 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:15.316967 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:15.317004 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:15.343514 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:15.343543 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:15.394164 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:15.394201 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:15.475808 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:15.475847 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:15.532790 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:15.525410   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.525962   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527526   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.527890   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:15.529344   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.033614 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:18.034099 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:18.034190 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:18.053426 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:18.053519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:18.072396 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:18.072461 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:18.090428 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.090453 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:18.090519 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:18.109580 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:18.109661 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:18.127869 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.127899 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:18.127954 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:18.146622 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:18.146695 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:18.165973 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.165995 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:18.166038 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:18.183152 2163332 logs.go:282] 0 containers: []
	W0804 10:08:18.183175 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:18.183190 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:18.183204 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:18.239841 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:18.232099   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.232612   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234166   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.234591   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:18.236113   10448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:18.239862 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:18.239874 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:18.260920 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:18.260946 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:18.304135 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:18.304170 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:18.356641 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:18.356679 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:18.376311 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:18.376341 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:18.460920 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:18.460965 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:18.488725 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:18.488755 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:18.509858 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:18.509894 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:18.546219 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:18.546248 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.073317 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:21.073860 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:21.073971 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:21.093222 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:21.093346 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:21.111951 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:21.112042 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:21.130287 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.130308 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:21.130359 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:21.148384 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:21.148471 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:21.166576 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.166604 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:21.166652 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:21.185348 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:21.185427 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:21.203596 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.203622 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:21.203681 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:21.221592 2163332 logs.go:282] 0 containers: []
	W0804 10:08:21.221620 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:21.221640 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:21.221652 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:21.277441 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:21.269692   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.270305   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.271725   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.272213   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:21.273739   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0804 10:08:21.277466 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:21.277482 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:21.298481 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:21.298511 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:21.350381 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:21.350418 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:21.371474 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:21.371501 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:21.408284 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:21.408313 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:21.485994 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:21.486031 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:21.512310 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:21.512339 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:21.539196 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:21.539228 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:21.581887 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:21.581920 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0804 10:08:22.886436 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": net/http: TLS handshake timeout
	W0804 10:08:25.383211 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:24.102885 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:24.103356 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:24.103454 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:24.123078 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:24.123144 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:24.141483 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:24.141545 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:24.159538 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.159565 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:24.159610 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:24.177499 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:24.177574 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:24.195218 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.195246 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:24.195289 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:24.213410 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:24.213501 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:24.231595 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.231619 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:24.231675 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:24.250451 2163332 logs.go:282] 0 containers: []
	W0804 10:08:24.250478 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:24.250497 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:24.250511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:24.269653 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:24.269681 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:24.348982 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:24.349027 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:24.405452 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:24.397972   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.398529   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400132   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.400600   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:24.402109   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:24.405476 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:24.405491 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:24.431565 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:24.431593 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:24.469920 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:24.469948 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:24.495911 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:24.495942 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:24.516767 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:24.516796 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:24.559809 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:24.559846 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:24.612215 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:24.612251 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.134399 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:27.134902 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:27.135016 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:27.154460 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:27.154526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:27.172467 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:27.172537 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:27.190547 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.190571 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:27.190626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:27.208406 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:27.208478 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:27.226270 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.226293 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:27.226347 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:27.244648 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:27.244710 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:27.262363 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.262384 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:27.262429 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:27.280761 2163332 logs.go:282] 0 containers: []
	W0804 10:08:27.280791 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:27.280811 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:27.280828 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:27.337516 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:27.329752   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.330367   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.331865   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.332331   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:27.333862   10991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:27.337538 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:27.337554 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:27.383205 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:27.383237 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:27.402831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:27.402863 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:27.439987 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:27.440016 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:27.467188 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:27.467220 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:27.488626 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:27.488651 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:27.538307 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:27.538341 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:27.558848 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:27.558875 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:27.640317 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:27.640360 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 10:08:27.383261 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:29.883318 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:30.169015 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:30.169492 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:30.169591 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:30.188919 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:30.189000 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:30.208903 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:30.208986 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:30.226974 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.227006 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:30.227061 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:30.245555 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:30.245625 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:30.263987 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.264013 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:30.264059 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:30.282944 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:30.283023 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:30.301744 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.301773 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:30.301834 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:30.320893 2163332 logs.go:282] 0 containers: []
	W0804 10:08:30.320919 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:30.320936 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:30.320951 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:30.397888 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:30.397925 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:30.418812 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:30.418837 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:30.464089 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:30.464123 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:30.484745 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:30.484778 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:30.504805 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:30.504837 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:30.530475 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:30.530511 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:30.586445 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:30.578622   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.579233   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.580788   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.581197   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:30.582760   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:30.586465 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:30.586478 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:30.613024 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:30.613054 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:30.666024 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:30.666060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:31.883721 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:34.383160 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:33.203579 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:33.204060 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:33.204180 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:33.223272 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:33.223341 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:33.242111 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:33.242191 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:33.260564 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.260587 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:33.260632 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:33.279120 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:33.279198 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:33.297558 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.297581 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:33.297626 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:33.315911 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:33.315987 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:33.334504 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.334534 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:33.334594 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:33.352831 2163332 logs.go:282] 0 containers: []
	W0804 10:08:33.352855 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:33.352876 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:33.352891 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:33.431146 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:33.431188 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:33.457483 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:33.457516 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:33.512587 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:33.505280   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.505794   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507387   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.507829   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:33.509409   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:33.512614 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:33.512630 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:33.563154 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:33.563186 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:33.584703 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:33.584730 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:33.603831 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:33.603862 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:33.641549 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:33.641579 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:33.667027 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:33.667056 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:33.688258 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:33.688291 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.234388 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:36.234842 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:36.234932 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:36.253452 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:36.253531 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:36.272517 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:36.272578 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:36.290793 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.290815 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:36.290859 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:36.309868 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:36.309951 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:36.328038 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.328065 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:36.328128 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:36.346447 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:36.346526 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:36.364698 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.364720 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:36.364774 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:36.382618 2163332 logs.go:282] 0 containers: []
	W0804 10:08:36.382649 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:36.382672 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:36.382687 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:36.460757 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:36.460795 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:36.517181 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:36.509281   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.509826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511400   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.511826   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:36.513375   11540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:36.517202 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:36.517218 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:36.570857 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:36.570896 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:36.590896 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:36.590929 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:36.616290 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:36.616323 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:36.643271 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:36.643298 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:36.663678 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:36.663704 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:36.708665 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:36.708695 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:36.729524 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:36.729551 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 10:08:36.383928 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:38.883516 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:08:39.267469 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:39.267990 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:39.268120 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:39.287780 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:39.287877 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:39.307153 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:39.307248 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:39.326719 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.326752 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:39.326810 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:39.345319 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:39.345387 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:39.363424 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.363455 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:39.363511 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:39.381746 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:39.381825 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:39.399785 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.399809 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:39.399862 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:39.419064 2163332 logs.go:282] 0 containers: []
	W0804 10:08:39.419095 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:39.419121 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:39.419136 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:39.501950 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:39.501998 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:39.528491 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:39.528525 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:39.585466 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:39.578061   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.578577   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580045   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.580462   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:39.581949   11719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:39.585497 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:39.585518 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:39.611559 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:39.611590 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:39.632402 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:39.632438 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:39.677721 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:39.677758 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:39.728453 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:39.728487 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:39.752029 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:39.752060 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:39.772376 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:39.772408 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.311175 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:42.311726 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:42.311836 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0804 10:08:42.331694 2163332 logs.go:282] 1 containers: [546ccc0d47d3]
	I0804 10:08:42.331761 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0804 10:08:42.350128 2163332 logs.go:282] 1 containers: [1f24d4315f70]
	I0804 10:08:42.350202 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0804 10:08:42.368335 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.368358 2163332 logs.go:284] No container was found matching "coredns"
	I0804 10:08:42.368411 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0804 10:08:42.385942 2163332 logs.go:282] 2 containers: [4d9bcb766848 89bc4723825b]
	I0804 10:08:42.386020 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0804 10:08:42.403768 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.403788 2163332 logs.go:284] No container was found matching "kube-proxy"
	I0804 10:08:42.403840 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0804 10:08:42.422612 2163332 logs.go:282] 1 containers: [db8e2ca87b17]
	I0804 10:08:42.422679 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0804 10:08:42.439585 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.439609 2163332 logs.go:284] No container was found matching "kindnet"
	I0804 10:08:42.439659 2163332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0804 10:08:42.457208 2163332 logs.go:282] 0 containers: []
	W0804 10:08:42.457229 2163332 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0804 10:08:42.457263 2163332 logs.go:123] Gathering logs for kubelet ...
	I0804 10:08:42.457279 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 10:08:42.535545 2163332 logs.go:123] Gathering logs for dmesg ...
	I0804 10:08:42.535578 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 10:08:42.561612 2163332 logs.go:123] Gathering logs for describe nodes ...
	I0804 10:08:42.561641 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 10:08:42.616811 2163332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0804 10:08:42.609048   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.609673   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611215   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.611642   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:08:42.613094   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 10:08:42.616832 2163332 logs.go:123] Gathering logs for kube-apiserver [546ccc0d47d3] ...
	I0804 10:08:42.616847 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 546ccc0d47d3"
	I0804 10:08:42.643211 2163332 logs.go:123] Gathering logs for kube-controller-manager [db8e2ca87b17] ...
	I0804 10:08:42.643240 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8e2ca87b17"
	I0804 10:08:42.663882 2163332 logs.go:123] Gathering logs for Docker ...
	I0804 10:08:42.663910 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0804 10:08:42.683025 2163332 logs.go:123] Gathering logs for container status ...
	I0804 10:08:42.683052 2163332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 10:08:42.722746 2163332 logs.go:123] Gathering logs for etcd [1f24d4315f70] ...
	I0804 10:08:42.722772 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f24d4315f70"
	I0804 10:08:42.743550 2163332 logs.go:123] Gathering logs for kube-scheduler [4d9bcb766848] ...
	I0804 10:08:42.743589 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9bcb766848"
	I0804 10:08:42.788986 2163332 logs.go:123] Gathering logs for kube-scheduler [89bc4723825b] ...
	I0804 10:08:42.789023 2163332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89bc4723825b"
	I0804 10:08:45.340596 2163332 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0804 10:08:45.341080 2163332 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0804 10:08:45.343076 2163332 out.go:201] 
	W0804 10:08:45.344232 2163332 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0804 10:08:45.344248 2163332 out.go:270] * 
	W0804 10:08:45.346020 2163332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:08:45.347852 2163332 out.go:201] 
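
The block above is minikube's apiserver wait loop for the functional-699837 start: each pass probes https://192.168.76.2:8443/healthz, gets connection refused, re-enumerates the control-plane containers with docker ps -a --filter=name=k8s_..., re-collects their logs, and retries on a roughly three-second cadence until the 6m0s budget lapses with GUEST_START. A minimal manual probe of the same endpoint, shown purely as an illustration (the address comes from the log above; -k is an assumption, since the apiserver serves the cluster's self-signed certificate):

	# hypothetical manual re-run of the health probe from the loop above
	curl -k --max-time 5 https://192.168.76.2:8443/healthz
	# a healthy apiserver answers "ok"; here this would fail with
	# "connection refused", matching the api_server.go:269 lines above
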
	W0804 10:08:40.883920 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:42.884060 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:45.384074 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:47.883235 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:50.383116 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:52.383162 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:54.383410 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:56.383810 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:08:58.883290 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:00.883650 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:03.383190 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:05.383617 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:07.384051 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:09.883346 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:11.883783 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:13.884208 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:16.383435 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:18.383891 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:20.883429 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:22.884027 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:25.383556 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:27.883164 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:29.883548 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:31.883955 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:34.383514 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:36.883247 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:38.883512 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:40.884109 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	W0804 10:09:43.383400 2149628 node_ready.go:55] error getting node "no-preload-499486" condition "Ready" status (will retry): Get "https://192.168.94.2:8443/api/v1/nodes/no-preload-499486": dial tcp 192.168.94.2:8443: connect: connection refused
	I0804 10:09:45.383376 2149628 node_ready.go:38] duration metric: took 6m0.000813638s for node "no-preload-499486" to be "Ready" ...
	I0804 10:09:45.385759 2149628 out.go:201] 
	W0804 10:09:45.386973 2149628 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0804 10:09:45.386995 2149628 out.go:270] * 
	W0804 10:09:45.389624 2149628 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 10:09:45.390891 2149628 out.go:201] 
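
The interleaved 2149628 lines are the parallel no-preload-499486 start hitting the same condition from the node side: its readiness poller retries GET https://192.168.94.2:8443/api/v1/nodes/no-preload-499486 about every 2.5 seconds until its own 6m0s budget expires (WaitNodeCondition). An equivalent manual check, assuming a kubeconfig that targets that endpoint (the path is the in-guest one from the log, reused here only for illustration):

	# hypothetical manual version of the node_ready.go poll above
	sudo kubectl --kubeconfig /var/lib/minikube/kubeconfig get node no-preload-499486 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "True" once the node is Ready; here the request itself is refused

The "==> Docker <==" journal that follows is from the no-preload-499486 machine; the run of dockerd "ignoring event ... TaskDelete" entries records container tasks being torn down as the control-plane containers exit and restart, consistent with the crash loop seen above.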
	
	
	==> Docker <==
	Aug 04 10:03:45 no-preload-499486 cri-dockerd[1365]: time="2025-08-04T10:03:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/faaa3a488dc04608657ace902b23aff9e53e1d14755fdf70c32d9c4a86ae6ec6/resolv.conf as [nameserver 192.168.94.1 search local southamerica-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Aug 04 10:03:45 no-preload-499486 dockerd[1060]: time="2025-08-04T10:03:45.970130751Z" level=info msg="ignoring event" container=fc533eec1834b08c163742338f45821b5f02c6c5578ebe0fa5487906728547c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:07 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:07.509440559Z" level=info msg="ignoring event" container=835331562e21d7f94c792e7e547dd630d261e361d3dbf1c95186b90631d45ab4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:08 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:08.536777903Z" level=info msg="ignoring event" container=6c7c3e8e5a5a316e53d6dfbe663ac4dca13a60be5ece3da5dc2247e32f82d17a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:08 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:08.805380544Z" level=info msg="ignoring event" container=465ed5c63105c622faf628dc45dffc004b55d09148a84a0c45ec2f8a27c97fbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:39.818927796Z" level=info msg="ignoring event" container=0595640f46489eb8407e6e761b084aaf6097c9c319d96bc72e2a6da471c5d644 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:44 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:44.826174830Z" level=info msg="ignoring event" container=c53148ebe39d8e04e877760553c72fbbb0efca7dc09fc1550c0d193752988ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:04:46 no-preload-499486 dockerd[1060]: time="2025-08-04T10:04:46.743926255Z" level=info msg="ignoring event" container=c90ac788092b4d99962cf322dca6016fcbab4b4a8a55f82e1817c83b0f7d9215 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:28 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:28.445977627Z" level=info msg="ignoring event" container=624b9721d7e89385a14cf7a113afd2059fd713021c967546422f8d3e449b1c07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:33 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:33.808565031Z" level=info msg="ignoring event" container=86926cfa626f66ab359d1d7b13dfaa8c7749178320dbff42dccd2306e7130172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:05:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:05:39.468564300Z" level=info msg="ignoring event" container=7c4f93cb4bfbd43195edf99e929820bd4cd2ff17c1c7e1820fc35244264f90eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:39 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:39.443989985Z" level=info msg="ignoring event" container=b0de8a87430e54e04bae9e0fe793e3fda728c66cafdbbb857dfa8b70b7b849a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:41 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:41.920345198Z" level=info msg="ignoring event" container=95273882a0ba3beeec00a1ee16fc2e13f9dc7d28771bbf35eeed20bc1e617760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:06:56 no-preload-499486 dockerd[1060]: time="2025-08-04T10:06:56.807457292Z" level=info msg="ignoring event" container=9ce95901ec688dadabbfeba65d8a96e0cd422aa6483ce4093631e0769ecec314 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:23 no-preload-499486 dockerd[1060]: time="2025-08-04T10:08:23.128503844Z" level=info msg="ignoring event" container=152aef9e02ab4ddae450a3b16f379f3b222a44743fca7913d5d483269f9dfc2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:08:31 no-preload-499486 dockerd[1060]: time="2025-08-04T10:08:31.608495511Z" level=info msg="ignoring event" container=8fb3f2292ab14a56a1592fff79c30568329e27afc3d74f06f288f788a6b3c3a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:09:46 no-preload-499486 dockerd[1060]: time="2025-08-04T10:09:46.825802914Z" level=info msg="ignoring event" container=a810f701be18750d51044ccf9d9ff7fef305f901df6922bfca0f6a234ed1aa24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:11:34 no-preload-499486 dockerd[1060]: time="2025-08-04T10:11:34.510840058Z" level=info msg="ignoring event" container=472dcd03fe966df29d93b8c639b463faef262c9b90416aac5e23792b181bb14f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:11:36 no-preload-499486 dockerd[1060]: time="2025-08-04T10:11:36.385566691Z" level=info msg="ignoring event" container=823f70262bea3d5c7f4b24113caf89653caced8307fea734bda4d7fd9ee05224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:14:49 no-preload-499486 dockerd[1060]: time="2025-08-04T10:14:49.818859868Z" level=info msg="ignoring event" container=170e383b72244e90a4b5a27759222438dfdb8d4a28ad9820bdb56232fd5d66e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:16:58 no-preload-499486 dockerd[1060]: time="2025-08-04T10:16:58.167680349Z" level=info msg="ignoring event" container=6ea0a675973d81dde80ae3a00c3d70b3770278bb2eb3abbd26498cec2d3752d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:17:04 no-preload-499486 dockerd[1060]: time="2025-08-04T10:17:04.676044247Z" level=info msg="ignoring event" container=80aef0e1e41b81cb0f8b058ed3f2dccceb3285abc8cabc20f2603666b99f4941 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:19:58 no-preload-499486 dockerd[1060]: time="2025-08-04T10:19:58.808527556Z" level=info msg="ignoring event" container=8b2dd847d5a1ff932688da26db8679de7d861c012cee032d9640c49d1eba8f5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:22:26 no-preload-499486 dockerd[1060]: time="2025-08-04T10:22:26.155944898Z" level=info msg="ignoring event" container=8fae444e948796e6ad5ea1f211a7c937b7a2fddf282b4f5ebfd98f5d3cbbf42c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 10:22:27 no-preload-499486 dockerd[1060]: time="2025-08-04T10:22:27.260181207Z" level=info msg="ignoring event" container=72d2af0d249a1712e512c59b855d980cf09091db17d72b49f1e50c1244e733e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	72d2af0d249a1       9ad783615e1bc       About a minute ago   Exited              kube-controller-manager   13                  faaa3a488dc04       kube-controller-manager-no-preload-499486
	8fae444e94879       d85eea91cc41d       About a minute ago   Exited              kube-apiserver            13                  26755274d8951       kube-apiserver-no-preload-499486
	8b2dd847d5a1f       1e30c0b1e9b99       3 minutes ago        Exited              etcd                      13                  8e25ebb8a89d4       etcd-no-preload-499486
	f9db373fc015a       21d34a2aeacf5       19 minutes ago       Running             kube-scheduler            1                   5c8e648885840       kube-scheduler-no-preload-499486
	2a1c20b2ffee8       21d34a2aeacf5       25 minutes ago       Exited              kube-scheduler            0                   d2b1bfd452832       kube-scheduler-no-preload-499486
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0804 10:23:16.061415    6247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:23:16.061977    6247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:23:16.063587    6247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:23:16.064084    6247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0804 10:23:16.065790    6247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
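kubectl fails here for the same reason as every other probe in this log: nothing is listening on the apiserver port because kube-apiserver is itself in CrashLoopBackOff (see the container status section above and the kubelet section below). A minimal host-side check, assuming the Docker driver and that ss is available in the node image:

  # Expect no listener on 8443 while the apiserver container is down
  out/minikube-linux-amd64 ssh -p no-preload-499486 "sudo ss -ltnp | grep 8443"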
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003976] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000006] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +3.807738] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000008] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.251962] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000000] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +7.935446] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000007] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000034] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-30ac57a033af
	[  +0.000005] ll header: 00000000: e6 55 e2 99 27 88 6e 2b dd 20 6e c3 08 00
	[ +23.237968] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 e9 0e 42 0b 64 08 06
	[  +0.000446] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 d5 e2 93 f6 db 08 06
	[Aug 4 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da a7 c8 ad 52 b3 08 06
	[  +0.000606] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da d5 10 fe 4e 73 08 06
	
	
	==> etcd [8b2dd847d5a1] <==
	flag provided but not defined: -proxy-refresh-interval
	Usage:
	
	  etcd [flags]
	    Start an etcd server.
	
	  etcd --version
	    Show the version of etcd.
	
	  etcd -h | --help
	    Show the help information about etcd.
	
	  etcd --config-file
	    Path to the server configuration file. Note that if a configuration file is provided, other command line flags and environment variables will be ignored.
	
	  etcd gateway
	    Run the stateless pass-through etcd TCP connection forwarding proxy.
	
	  etcd grpc-proxy
	    Run the stateless etcd v3 gRPC L7 reverse proxy.
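This usage dump is the root of the whole cascade: etcd exits at startup because it was launched with -proxy-refresh-interval, a flag this etcd build does not define, so the datastore never comes up and the apiserver and controller-manager crash-loop behind it. A sketch of how one might confirm where the flag comes from, assuming the standard kubeadm static-pod manifest path inside the node:

  # Show the offending flag in the generated etcd manifest
  out/minikube-linux-amd64 ssh -p no-preload-499486 "sudo grep -n 'proxy-refresh-interval' /etc/kubernetes/manifests/etcd.yaml"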
	
	
	
	==> kernel <==
	 10:23:16 up 1 day, 19:04,  0 users,  load average: 0.15, 0.15, 0.70
	Linux no-preload-499486 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [8fae444e9487] <==
	W0804 10:22:06.125166       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:06.125233       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 10:22:06.127435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0804 10:22:06.133954       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0804 10:22:06.138919       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0804 10:22:06.138936       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 10:22:06.139147       1 instance.go:232] Using reconciler: lease
	W0804 10:22:06.139876       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0804 10:22:06.140013       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:07.126192       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:07.126204       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:07.141024       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:08.498968       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:08.711685       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:08.801570       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:11.326639       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:11.427590       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:11.650740       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:14.784552       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:14.959962       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:16.419424       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:20.853123       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:21.392653       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 10:22:22.719602       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 10:22:26.140198       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
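This fatal is downstream of the etcd failure shown in the etcd section above: the storage factory retries 127.0.0.1:2379 (the repeated grpc dial errors) until its context deadline, then the process exits, which the kubelet later reports as CrashLoopBackOff. One way to watch the pair cycle, assuming the Docker runtime this profile uses:

  # Both containers should show recent Exited states while the loop continues
  out/minikube-linux-amd64 ssh -p no-preload-499486 "sudo docker ps -a --filter name=etcd --filter name=kube-apiserver"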
	
	
	==> kube-controller-manager [72d2af0d249a] <==
	I0804 10:22:06.584862       1 serving.go:386] Generated self-signed cert in-memory
	I0804 10:22:07.222472       1 controllermanager.go:191] "Starting" version="v1.34.0-beta.0"
	I0804 10:22:07.222496       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 10:22:07.223811       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 10:22:07.223814       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 10:22:07.224141       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0804 10:22:07.224229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0804 10:22:27.226767       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.94.2:8443/healthz\": dial tcp 192.168.94.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [2a1c20b2ffee] <==
	E0804 10:02:28.175749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:02:31.304161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:02:32.791509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:02:34.007548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:02:40.294146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:02:43.128115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.94.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:02:45.421355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:02:50.083757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:02:51.361497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:05.497126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:03:08.537516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:03:11.097373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:03:11.729593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:03:12.801646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:03:17.035915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:03:18.849345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:03:23.883368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:03:24.360764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:03:24.447406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:03:25.585024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:03:26.613910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:03:28.018647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:03:28.621818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:03:34.452113       1 server.go:274] "handlers are not fully synchronized" err="context canceled"
	E0804 10:03:34.452246       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f9db373fc015] <==
	E0804 10:22:17.988188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0804 10:22:19.129704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:22:20.217710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:22:20.861351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:22:25.473825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0804 10:22:27.145881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.2:46804->192.168.94.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:22:27.145881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.2:46768->192.168.94.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:22:27.145881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.2:46772->192.168.94.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:22:27.145934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.94.2:46760->192.168.94.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:22:29.871123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0804 10:22:31.274287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.94.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0804 10:22:36.782756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:22:42.402417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0804 10:22:47.159983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0804 10:22:49.813152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0804 10:22:52.179155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0804 10:22:59.089161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.94.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0804 10:23:00.091087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0804 10:23:00.915639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0804 10:23:02.311447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0804 10:23:04.693215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.94.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0804 10:23:04.865345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0804 10:23:14.539088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.94.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0804 10:23:15.603139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0804 10:23:15.852824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.94.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.94.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	
	
	==> kubelet <==
	Aug 04 10:22:59 no-preload-499486 kubelet[1550]: E0804 10:22:59.252242    1550 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.94.2:8443/api/v1/namespaces/default/events/no-preload-499486.1858883701518ab4\": dial tcp 192.168.94.2:8443: connect: connection refused" event="&Event{ObjectMeta:{no-preload-499486.1858883701518ab4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:no-preload-499486,UID:no-preload-499486,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node no-preload-499486 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:no-preload-499486,},FirstTimestamp:2025-08-04 10:03:44.687508148 +0000 UTC m=+0.105538214,LastTimestamp:2025-08-04 10:03:44.785550698 +0000 UTC m=+0.203580765,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:no-preload-499486,}"
	Aug 04 10:23:00 no-preload-499486 kubelet[1550]: E0804 10:23:00.685673    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:23:00 no-preload-499486 kubelet[1550]: I0804 10:23:00.685768    1550 scope.go:117] "RemoveContainer" containerID="8b2dd847d5a1ff932688da26db8679de7d861c012cee032d9640c49d1eba8f5d"
	Aug 04 10:23:00 no-preload-499486 kubelet[1550]: E0804 10:23:00.685957    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-no-preload-499486_kube-system(c3193c4a9a9a9175b95883d7fe1bad87)\"" pod="kube-system/etcd-no-preload-499486" podUID="c3193c4a9a9a9175b95883d7fe1bad87"
	Aug 04 10:23:01 no-preload-499486 kubelet[1550]: I0804 10:23:01.153643    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:23:01 no-preload-499486 kubelet[1550]: E0804 10:23:01.154061    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:23:01 no-preload-499486 kubelet[1550]: E0804 10:23:01.245703    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:23:04 no-preload-499486 kubelet[1550]: E0804 10:23:04.777545    1550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"no-preload-499486\" not found"
	Aug 04 10:23:06 no-preload-499486 kubelet[1550]: E0804 10:23:06.685266    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:23:06 no-preload-499486 kubelet[1550]: I0804 10:23:06.685349    1550 scope.go:117] "RemoveContainer" containerID="8fae444e948796e6ad5ea1f211a7c937b7a2fddf282b4f5ebfd98f5d3cbbf42c"
	Aug 04 10:23:06 no-preload-499486 kubelet[1550]: E0804 10:23:06.685501    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-no-preload-499486_kube-system(f4c9aec0fc04dec0ce14ce1fda478878)\"" pod="kube-system/kube-apiserver-no-preload-499486" podUID="f4c9aec0fc04dec0ce14ce1fda478878"
	Aug 04 10:23:08 no-preload-499486 kubelet[1550]: I0804 10:23:08.155302    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:23:08 no-preload-499486 kubelet[1550]: E0804 10:23:08.155726    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:23:08 no-preload-499486 kubelet[1550]: E0804 10:23:08.246789    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	Aug 04 10:23:09 no-preload-499486 kubelet[1550]: E0804 10:23:09.253663    1550 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.94.2:8443/api/v1/namespaces/default/events/no-preload-499486.1858883701518ab4\": dial tcp 192.168.94.2:8443: connect: connection refused" event="&Event{ObjectMeta:{no-preload-499486.1858883701518ab4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:no-preload-499486,UID:no-preload-499486,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node no-preload-499486 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:no-preload-499486,},FirstTimestamp:2025-08-04 10:03:44.687508148 +0000 UTC m=+0.105538214,LastTimestamp:2025-08-04 10:03:44.785550698 +0000 UTC m=+0.203580765,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:no-preload-499486,}"
	Aug 04 10:23:09 no-preload-499486 kubelet[1550]: E0804 10:23:09.685018    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:23:09 no-preload-499486 kubelet[1550]: I0804 10:23:09.685110    1550 scope.go:117] "RemoveContainer" containerID="72d2af0d249a1712e512c59b855d980cf09091db17d72b49f1e50c1244e733e5"
	Aug 04 10:23:09 no-preload-499486 kubelet[1550]: E0804 10:23:09.685321    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-no-preload-499486_kube-system(a4b1d6b4ed5bdfde5a36a79a8a11f1a7)\"" pod="kube-system/kube-controller-manager-no-preload-499486" podUID="a4b1d6b4ed5bdfde5a36a79a8a11f1a7"
	Aug 04 10:23:12 no-preload-499486 kubelet[1550]: E0804 10:23:12.685725    1550 kubelet.go:3050] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"no-preload-499486\" not found" node="no-preload-499486"
	Aug 04 10:23:12 no-preload-499486 kubelet[1550]: I0804 10:23:12.685810    1550 scope.go:117] "RemoveContainer" containerID="8b2dd847d5a1ff932688da26db8679de7d861c012cee032d9640c49d1eba8f5d"
	Aug 04 10:23:12 no-preload-499486 kubelet[1550]: E0804 10:23:12.685956    1550 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-no-preload-499486_kube-system(c3193c4a9a9a9175b95883d7fe1bad87)\"" pod="kube-system/etcd-no-preload-499486" podUID="c3193c4a9a9a9175b95883d7fe1bad87"
	Aug 04 10:23:14 no-preload-499486 kubelet[1550]: E0804 10:23:14.778391    1550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"no-preload-499486\" not found"
	Aug 04 10:23:15 no-preload-499486 kubelet[1550]: I0804 10:23:15.156513    1550 kubelet_node_status.go:75] "Attempting to register node" node="no-preload-499486"
	Aug 04 10:23:15 no-preload-499486 kubelet[1550]: E0804 10:23:15.156959    1550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.94.2:8443/api/v1/nodes\": dial tcp 192.168.94.2:8443: connect: connection refused" node="no-preload-499486"
	Aug 04 10:23:15 no-preload-499486 kubelet[1550]: E0804 10:23:15.247866    1550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.94.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-499486?timeout=10s\": dial tcp 192.168.94.2:8443: connect: connection refused" interval="7s"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499486 -n no-preload-499486: exit status 2 (266.707647ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-499486" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.35s)


Test pass (367/431)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.74
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.33.3/json-events 17.51
13 TestDownloadOnly/v1.33.3/preload-exists 0
17 TestDownloadOnly/v1.33.3/LogsDuration 0.06
18 TestDownloadOnly/v1.33.3/DeleteAll 0.19
19 TestDownloadOnly/v1.33.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.34.0-beta.0/json-events 25.2
22 TestDownloadOnly/v1.34.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.34.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.34.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.34.0-beta.0/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 1.08
30 TestBinaryMirror 0.77
31 TestOffline 77.89
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 222.95
38 TestAddons/serial/Volcano 41.73
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 10.44
44 TestAddons/parallel/Registry 18.34
45 TestAddons/parallel/RegistryCreds 0.54
46 TestAddons/parallel/Ingress 21.67
47 TestAddons/parallel/InspektorGadget 5.27
48 TestAddons/parallel/MetricsServer 5.6
50 TestAddons/parallel/CSI 63.4
51 TestAddons/parallel/Headlamp 22.4
52 TestAddons/parallel/CloudSpanner 5.45
53 TestAddons/parallel/LocalPath 58.63
54 TestAddons/parallel/NvidiaDevicePlugin 6.44
55 TestAddons/parallel/Yakd 11.57
56 TestAddons/parallel/AmdGpuDevicePlugin 6.4
57 TestAddons/StoppedEnableDisable 11.05
58 TestCertOptions 35.09
59 TestCertExpiration 255.68
60 TestDockerFlags 31.59
61 TestForceSystemdFlag 29
62 TestForceSystemdEnv 30.14
64 TestKVMDriverInstallOrUpdate 4.25
68 TestErrorSpam/setup 28.14
69 TestErrorSpam/start 0.57
70 TestErrorSpam/status 0.84
71 TestErrorSpam/pause 1.15
72 TestErrorSpam/unpause 1.4
73 TestErrorSpam/stop 10.84
76 TestFunctional/serial/CopySyncFile 0
77 TestFunctional/serial/StartWithProxy 66.7
78 TestFunctional/serial/AuditLog 0
79 TestFunctional/serial/SoftStart 86.77
80 TestFunctional/serial/KubeContext 0.04
81 TestFunctional/serial/KubectlGetPods 0.12
84 TestFunctional/serial/CacheCmd/cache/add_remote 2.31
85 TestFunctional/serial/CacheCmd/cache/add_local 2.28
86 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
87 TestFunctional/serial/CacheCmd/cache/list 0.05
88 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
89 TestFunctional/serial/CacheCmd/cache/cache_reload 1.24
90 TestFunctional/serial/CacheCmd/cache/delete 0.1
91 TestFunctional/serial/MinikubeKubectlCmd 0.11
92 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
93 TestFunctional/serial/ExtraConfig 41.62
94 TestFunctional/serial/ComponentHealth 0.06
95 TestFunctional/serial/LogsCmd 0.92
96 TestFunctional/serial/LogsFileCmd 0.95
97 TestFunctional/serial/InvalidService 4.99
99 TestFunctional/parallel/ConfigCmd 0.39
100 TestFunctional/parallel/DashboardCmd 34.52
101 TestFunctional/parallel/DryRun 0.4
102 TestFunctional/parallel/InternationalLanguage 0.17
103 TestFunctional/parallel/StatusCmd 0.95
107 TestFunctional/parallel/ServiceCmdConnect 12.6
108 TestFunctional/parallel/AddonsCmd 0.15
109 TestFunctional/parallel/PersistentVolumeClaim 52.2
111 TestFunctional/parallel/SSHCmd 0.67
112 TestFunctional/parallel/CpCmd 1.91
113 TestFunctional/parallel/MySQL 26.26
114 TestFunctional/parallel/FileSync 0.25
115 TestFunctional/parallel/CertSync 1.73
119 TestFunctional/parallel/NodeLabels 0.06
121 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
123 TestFunctional/parallel/License 0.72
124 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.31
130 TestFunctional/parallel/ServiceCmd/List 0.47
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
133 TestFunctional/parallel/ServiceCmd/Format 0.31
134 TestFunctional/parallel/ServiceCmd/URL 0.31
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 0.91
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 5.13
142 TestFunctional/parallel/ImageCommands/Setup 3.55
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/DockerEnv/bash 1
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.81
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.46
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
157 TestFunctional/parallel/ProfileCmd/profile_list 0.35
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
159 TestFunctional/parallel/MountCmd/any-port 17.52
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
164 TestFunctional/parallel/MountCmd/specific-port 1.86
165 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
166 TestFunctional/delete_echo-server_images 0.04
167 TestFunctional/delete_my-image_image 0.02
168 TestFunctional/delete_minikube_cached_images 0.02
172 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CopySyncFile 0
174 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/AuditLog 0
176 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubeContext 0.04
180 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_remote 2.08
181 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_local 2.29
182 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
183 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/list 0.05
184 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.26
185 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/cache_reload 1.2
186 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/delete 0.1
191 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsCmd 0.74
192 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsFileCmd 0.77
195 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ConfigCmd 0.35
197 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DryRun 0.41
198 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/InternationalLanguage 0.15
204 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/AddonsCmd 0.15
207 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/SSHCmd 0.53
208 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CpCmd 1.73
210 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/FileSync 0.29
211 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CertSync 1.76
217 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NonActiveRuntimeDisabled 0.3
219 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/License 0.57
223 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_not_create 0.45
224 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/short 0.06
225 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/components 0.48
228 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_list 0.37
230 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
232 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListShort 0.21
233 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListTable 0.21
234 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListJson 0.21
235 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListYaml 0.21
236 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageBuild 4.93
237 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/Setup 1.8
238 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/specific-port 2.17
239 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.16
240 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.96
241 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/VerifyCleanup 1.03
242 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 2.57
244 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_changes 0.14
245 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
246 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
249 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
250 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
254 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageRemove 0.41
255 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.57
256 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.38
260 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
261 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_echo-server_images 0.04
262 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_my-image_image 0.02
263 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_minikube_cached_images 0.01
267 TestMultiControlPlane/serial/StartCluster 101.4
268 TestMultiControlPlane/serial/DeployApp 38.18
269 TestMultiControlPlane/serial/PingHostFromPods 1.11
270 TestMultiControlPlane/serial/AddWorkerNode 14.12
271 TestMultiControlPlane/serial/NodeLabels 0.07
272 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
273 TestMultiControlPlane/serial/CopyFile 16.14
274 TestMultiControlPlane/serial/StopSecondaryNode 11.39
275 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
276 TestMultiControlPlane/serial/RestartSecondaryNode 36.83
277 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
278 TestMultiControlPlane/serial/RestartClusterKeepsNodes 152.78
279 TestMultiControlPlane/serial/DeleteSecondaryNode 9.29
280 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
281 TestMultiControlPlane/serial/StopCluster 32.56
282 TestMultiControlPlane/serial/RestartCluster 91.3
283 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
284 TestMultiControlPlane/serial/AddSecondaryNode 26.14
285 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
288 TestImageBuild/serial/Setup 26.45
289 TestImageBuild/serial/NormalBuild 1.05
290 TestImageBuild/serial/BuildWithBuildArg 0.65
291 TestImageBuild/serial/BuildWithDockerIgnore 0.46
292 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.49
296 TestJSONOutput/start/Command 71.27
297 TestJSONOutput/start/Audit 0
299 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/pause/Command 0.48
303 TestJSONOutput/pause/Audit 0
305 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/unpause/Command 0.47
309 TestJSONOutput/unpause/Audit 0
311 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
314 TestJSONOutput/stop/Command 10.81
315 TestJSONOutput/stop/Audit 0
317 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
318 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
319 TestErrorJSONOutput 0.2
321 TestKicCustomNetwork/create_custom_network 28.27
322 TestKicCustomNetwork/use_default_bridge_network 26.82
323 TestKicExistingNetwork 27.3
324 TestKicCustomSubnet 28.15
325 TestKicStaticIP 27.67
326 TestMainNoArgs 0.05
327 TestMinikubeProfile 56.94
330 TestMountStart/serial/StartWithMountFirst 10.56
331 TestMountStart/serial/VerifyMountFirst 0.24
332 TestMountStart/serial/StartWithMountSecond 10.67
333 TestMountStart/serial/VerifyMountSecond 0.25
334 TestMountStart/serial/DeleteFirst 1.44
335 TestMountStart/serial/VerifyMountPostDelete 0.24
336 TestMountStart/serial/Stop 1.17
337 TestMountStart/serial/RestartStopped 9.02
338 TestMountStart/serial/VerifyMountPostStop 0.24
341 TestMultiNode/serial/FreshStart2Nodes 62.39
342 TestMultiNode/serial/DeployApp2Nodes 58.85
343 TestMultiNode/serial/PingHostFrom2Pods 0.73
344 TestMultiNode/serial/AddNode 14.12
345 TestMultiNode/serial/MultiNodeLabels 0.07
346 TestMultiNode/serial/ProfileList 0.63
347 TestMultiNode/serial/CopyFile 9.25
348 TestMultiNode/serial/StopNode 2.09
349 TestMultiNode/serial/StartAfterStop 8.74
350 TestMultiNode/serial/RestartKeepsNodes 76.5
351 TestMultiNode/serial/DeleteNode 5.19
352 TestMultiNode/serial/StopMultiNode 21.53
353 TestMultiNode/serial/RestartMultiNode 56.98
354 TestMultiNode/serial/ValidateNameConflict 27.32
359 TestPreload 110.7
361 TestScheduledStopUnix 99.09
362 TestSkaffold 111.84
364 TestInsufficientStorage 9.83
365 TestRunningBinaryUpgrade 101.63
368 TestMissingContainerUpgrade 199.57
369 TestStoppedBinaryUpgrade/Setup 3.25
370 TestStoppedBinaryUpgrade/Upgrade 194.06
379 TestPause/serial/Start 76.03
380 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
382 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
383 TestNoKubernetes/serial/StartWithK8s 31.69
395 TestNoKubernetes/serial/StartWithStopK8s 17.49
396 TestNoKubernetes/serial/Start 7.33
397 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
398 TestNoKubernetes/serial/ProfileList 30.63
399 TestPause/serial/SecondStartNoReconfiguration 75.76
400 TestNoKubernetes/serial/Stop 1.26
401 TestNoKubernetes/serial/StartNoArgs 8.39
402 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
404 TestStartStop/group/old-k8s-version/serial/FirstStart 115.9
405 TestPause/serial/Pause 0.57
406 TestPause/serial/VerifyStatus 0.3
407 TestPause/serial/Unpause 0.73
408 TestPause/serial/PauseAgain 0.73
409 TestPause/serial/DeletePaused 2.47
410 TestPause/serial/VerifyDeletedResources 16.32
412 TestStartStop/group/embed-certs/serial/FirstStart 76.15
413 TestStartStop/group/embed-certs/serial/DeployApp 11.28
414 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
415 TestStartStop/group/embed-certs/serial/Stop 10.75
416 TestStartStop/group/old-k8s-version/serial/DeployApp 10.35
417 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
418 TestStartStop/group/embed-certs/serial/SecondStart 52.45
419 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
420 TestStartStop/group/old-k8s-version/serial/Stop 10.8
421 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
422 TestStartStop/group/old-k8s-version/serial/SecondStart 114.33
423 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
424 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
425 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 1.45
426 TestStartStop/group/embed-certs/serial/Pause 2.5
430 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.98
431 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
432 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
433 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
434 TestStartStop/group/old-k8s-version/serial/Pause 2.29
437 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.24
438 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
439 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.72
440 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
441 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.4
442 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
443 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
444 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.45
445 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.31
446 TestNetworkPlugins/group/auto/Start 69.82
447 TestNetworkPlugins/group/auto/KubeletFlags 0.25
448 TestNetworkPlugins/group/auto/NetCatPod 9.19
449 TestNetworkPlugins/group/auto/DNS 0.12
450 TestNetworkPlugins/group/auto/Localhost 0.1
451 TestNetworkPlugins/group/auto/HairPin 0.11
452 TestNetworkPlugins/group/calico/Start 60.84
453 TestNetworkPlugins/group/custom-flannel/Start 52.06
454 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
455 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.18
456 TestNetworkPlugins/group/calico/ControllerPod 6.01
457 TestNetworkPlugins/group/calico/KubeletFlags 0.26
458 TestNetworkPlugins/group/calico/NetCatPod 8.19
459 TestNetworkPlugins/group/custom-flannel/DNS 0.16
460 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
461 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
462 TestNetworkPlugins/group/calico/DNS 0.17
463 TestNetworkPlugins/group/calico/Localhost 0.12
464 TestNetworkPlugins/group/calico/HairPin 0.11
465 TestNetworkPlugins/group/false/Start 82.34
466 TestNetworkPlugins/group/kindnet/Start 60.31
467 TestNetworkPlugins/group/kindnet/ControllerPod 6
468 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
469 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
470 TestNetworkPlugins/group/false/KubeletFlags 0.26
471 TestNetworkPlugins/group/false/NetCatPod 9.17
472 TestNetworkPlugins/group/kindnet/DNS 0.13
473 TestNetworkPlugins/group/kindnet/Localhost 0.14
474 TestNetworkPlugins/group/kindnet/HairPin 0.11
475 TestNetworkPlugins/group/false/DNS 0.14
476 TestNetworkPlugins/group/false/Localhost 0.12
477 TestNetworkPlugins/group/false/HairPin 0.11
478 TestNetworkPlugins/group/flannel/Start 79.34
479 TestNetworkPlugins/group/enable-default-cni/Start 71.75
482 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
483 TestNetworkPlugins/group/flannel/ControllerPod 6.01
484 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
485 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
486 TestNetworkPlugins/group/flannel/NetCatPod 9.17
487 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
488 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
489 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
490 TestNetworkPlugins/group/flannel/DNS 0.13
491 TestNetworkPlugins/group/flannel/Localhost 0.11
492 TestNetworkPlugins/group/flannel/HairPin 0.11
493 TestNetworkPlugins/group/bridge/Start 68.3
494 TestStartStop/group/newest-cni/serial/DeployApp 0
496 TestNetworkPlugins/group/kubenet/Start 64.89
497 TestStartStop/group/no-preload/serial/Stop 1.19
498 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
500 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
501 TestNetworkPlugins/group/bridge/NetCatPod 9.17
502 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
503 TestNetworkPlugins/group/kubenet/NetCatPod 9.18
504 TestNetworkPlugins/group/bridge/DNS 0.13
505 TestNetworkPlugins/group/bridge/Localhost 0.11
506 TestNetworkPlugins/group/bridge/HairPin 0.11
507 TestNetworkPlugins/group/kubenet/DNS 0.12
508 TestNetworkPlugins/group/kubenet/Localhost 0.1
509 TestNetworkPlugins/group/kubenet/HairPin 0.1
510 TestStartStop/group/newest-cni/serial/Stop 1.19
511 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
513 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
514 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
515 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.38
TestDownloadOnly/v1.20.0/json-events (16.74s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-723176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-723176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (16.73799181s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.74s)
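
For reference, the step above can be replayed outside the test harness. A minimal sketch, assuming a locally built binary at out/minikube-linux-amd64 and a working Docker daemon (flags copied from the invocation logged above; the profile name is arbitrary):

    # Download-only start: fetch the kicbase image and the v1.20.0 preload
    # tarball into the local cache without ever creating a cluster.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-723176 \
      --force --alsologtostderr --kubernetes-version=v1.20.0 \
      --container-runtime=docker --driver=docker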

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0804 08:34:34.386328 1582690 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0804 08:34:34.386443 1582690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
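
The preload-exists check only asserts that the tarball fetched by the previous step landed in the local cache. A sketch of the same check by hand; the path below is specific to this CI machine (copied from the log), and the expected md5 is the one the downloader pins in its checksum URL parameter further down in this report:

    PRELOAD=/home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
    ls -lh "$PRELOAD"   # must exist after the download-only start
    md5sum "$PRELOAD"   # expected: 9a82241e9b8b4ad2b5cca73108f2c7a3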

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-723176
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-723176: exit status 85 (60.348877ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-723176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-723176 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:34:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 08:34:17.690375 1582702 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:34:17.690610 1582702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:34:17.690620 1582702 out.go:358] Setting ErrFile to fd 2...
	I0804 08:34:17.690624 1582702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:34:17.690786 1582702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	W0804 08:34:17.690916 1582702 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21223-1578987/.minikube/config/config.json: open /home/jenkins/minikube-integration/21223-1578987/.minikube/config/config.json: no such file or directory
	I0804 08:34:17.691497 1582702 out.go:352] Setting JSON to true
	I0804 08:34:17.692457 1582702 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":148547,"bootTime":1754147911,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:34:17.692508 1582702 start.go:140] virtualization: kvm guest
	I0804 08:34:17.694433 1582702 out.go:97] [download-only-723176] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0804 08:34:17.694549 1582702 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball: no such file or directory
	I0804 08:34:17.694598 1582702 notify.go:220] Checking for updates...
	I0804 08:34:17.695561 1582702 out.go:169] MINIKUBE_LOCATION=21223
	I0804 08:34:17.697019 1582702 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:34:17.698205 1582702 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:34:17.699151 1582702 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:34:17.700157 1582702 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 08:34:17.702174 1582702 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 08:34:17.702395 1582702 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:34:17.723835 1582702 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:34:17.723919 1582702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:34:18.033202 1582702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-08-04 08:34:18.023623921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:34:18.033343 1582702 docker.go:318] overlay module found
	I0804 08:34:18.034685 1582702 out.go:97] Using the docker driver based on user configuration
	I0804 08:34:18.034707 1582702 start.go:304] selected driver: docker
	I0804 08:34:18.034716 1582702 start.go:918] validating driver "docker" against <nil>
	I0804 08:34:18.034807 1582702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:34:18.085871 1582702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-08-04 08:34:18.077377139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:34:18.086028 1582702 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0804 08:34:18.086505 1582702 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0804 08:34:18.086695 1582702 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 08:34:18.088204 1582702 out.go:169] Using Docker driver with root privileges
	I0804 08:34:18.089172 1582702 cni.go:84] Creating CNI manager for ""
	I0804 08:34:18.089278 1582702 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0804 08:34:18.089358 1582702 start.go:348] cluster config:
	{Name:download-only-723176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-723176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:
1m0s}
	I0804 08:34:18.090364 1582702 out.go:97] Starting "download-only-723176" primary control-plane node in "download-only-723176" cluster
	I0804 08:34:18.090383 1582702 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:34:18.091309 1582702 out.go:97] Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:34:18.091335 1582702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0804 08:34:18.091441 1582702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:34:18.107669 1582702 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d to local cache
	I0804 08:34:18.107869 1582702 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local cache directory
	I0804 08:34:18.107948 1582702 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d to local cache
	I0804 08:34:18.404563 1582702 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0804 08:34:18.404602 1582702 cache.go:56] Caching tarball of preloaded images
	I0804 08:34:18.404784 1582702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0804 08:34:18.406437 1582702 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0804 08:34:18.406467 1582702 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 08:34:18.563694 1582702 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0804 08:34:32.160708 1582702 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 08:34:32.160802 1582702 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-723176 host does not exist
	  To start a cluster, run: "minikube start -p download-only-723176"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
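
Exit status 85 here is the expected outcome rather than a flake: the download-only profile never created a host, so there is nothing to collect logs from. A sketch of the same check, assuming the binary and profile from the steps above:

    out/minikube-linux-amd64 logs -p download-only-723176
    echo $?   # expected: 85 (no running control-plane host for this profile)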

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-723176
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
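
The two delete steps above correspond to the last entries that later appear in the audit table: one removes every profile, the other targets the already-deleted profile by name and, per the test name, must still succeed. A sketch with the same binary:

    out/minikube-linux-amd64 delete --all                     # remove all profiles
    out/minikube-linux-amd64 delete -p download-only-723176  # per DeleteAlwaysSucceeds, exits 0 even if the profile is already gone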

TestDownloadOnly/v1.33.3/json-events (17.51s)

=== RUN   TestDownloadOnly/v1.33.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-348320 --force --alsologtostderr --kubernetes-version=v1.33.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-348320 --force --alsologtostderr --kubernetes-version=v1.33.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.50694723s)
--- PASS: TestDownloadOnly/v1.33.3/json-events (17.51s)

TestDownloadOnly/v1.33.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.33.3/preload-exists
I0804 08:34:52.280751 1582690 preload.go:131] Checking if preload exists for k8s version v1.33.3 and runtime docker
I0804 08:34:52.280799 1582690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.3/preload-exists (0.00s)

TestDownloadOnly/v1.33.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.33.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-348320
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-348320: exit status 85 (62.111189ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-723176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-723176 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │ 04 Aug 25 08:34 UTC │
	│ delete  │ -p download-only-723176                                                                                                                                                       │ download-only-723176 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │ 04 Aug 25 08:34 UTC │
	│ start   │ -o=json --download-only -p download-only-348320 --force --alsologtostderr --kubernetes-version=v1.33.3 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-348320 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:34:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 08:34:34.815494 1583091 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:34:34.815777 1583091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:34:34.815788 1583091 out.go:358] Setting ErrFile to fd 2...
	I0804 08:34:34.815792 1583091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:34:34.816012 1583091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:34:34.816616 1583091 out.go:352] Setting JSON to true
	I0804 08:34:34.817565 1583091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":148564,"bootTime":1754147911,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:34:34.817658 1583091 start.go:140] virtualization: kvm guest
	I0804 08:34:34.819347 1583091 out.go:97] [download-only-348320] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:34:34.819503 1583091 notify.go:220] Checking for updates...
	I0804 08:34:34.820633 1583091 out.go:169] MINIKUBE_LOCATION=21223
	I0804 08:34:34.821784 1583091 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:34:34.822975 1583091 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:34:34.823953 1583091 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:34:34.824895 1583091 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 08:34:34.826487 1583091 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 08:34:34.826722 1583091 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:34:34.851383 1583091 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:34:34.851460 1583091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:34:34.896550 1583091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-08-04 08:34:34.887649542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:34:34.896671 1583091 docker.go:318] overlay module found
	I0804 08:34:34.898026 1583091 out.go:97] Using the docker driver based on user configuration
	I0804 08:34:34.898049 1583091 start.go:304] selected driver: docker
	I0804 08:34:34.898054 1583091 start.go:918] validating driver "docker" against <nil>
	I0804 08:34:34.898130 1583091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:34:34.943747 1583091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-08-04 08:34:34.93529022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:34:34.943965 1583091 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0804 08:34:34.944614 1583091 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0804 08:34:34.944818 1583091 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 08:34:34.946272 1583091 out.go:169] Using Docker driver with root privileges
	I0804 08:34:34.947221 1583091 cni.go:84] Creating CNI manager for ""
	I0804 08:34:34.947305 1583091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:34:34.947320 1583091 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 08:34:34.947409 1583091 start.go:348] cluster config:
	{Name:download-only-348320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:download-only-348320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterv
al:1m0s}
	I0804 08:34:34.948479 1583091 out.go:97] Starting "download-only-348320" primary control-plane node in "download-only-348320" cluster
	I0804 08:34:34.948513 1583091 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:34:34.949493 1583091 out.go:97] Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:34:34.949520 1583091 preload.go:131] Checking if preload exists for k8s version v1.33.3 and runtime docker
	I0804 08:34:34.949580 1583091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:34:34.965252 1583091 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d to local cache
	I0804 08:34:34.965397 1583091 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local cache directory
	I0804 08:34:34.965416 1583091 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local cache directory, skipping pull
	I0804 08:34:34.965421 1583091 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in cache, skipping pull
	I0804 08:34:34.965432 1583091 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d as a tarball
	I0804 08:34:35.528729 1583091 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.3/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4
	I0804 08:34:35.528784 1583091 cache.go:56] Caching tarball of preloaded images
	I0804 08:34:35.528955 1583091 preload.go:131] Checking if preload exists for k8s version v1.33.3 and runtime docker
	I0804 08:34:35.530410 1583091 out.go:97] Downloading Kubernetes v1.33.3 preload ...
	I0804 08:34:35.530428 1583091 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4 ...
	I0804 08:34:35.693818 1583091 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.3/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4?checksum=md5:5ce5e52525711a67dbeaf90c49261a1d -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-348320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-348320"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.3/LogsDuration (0.06s)

TestDownloadOnly/v1.33.3/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.33.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.3/DeleteAll (0.19s)

TestDownloadOnly/v1.33.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.33.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-348320
--- PASS: TestDownloadOnly/v1.33.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.0-beta.0/json-events (25.2s)

=== RUN   TestDownloadOnly/v1.34.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-769840 --force --alsologtostderr --kubernetes-version=v1.34.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-769840 --force --alsologtostderr --kubernetes-version=v1.34.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (25.195418667s)
--- PASS: TestDownloadOnly/v1.34.0-beta.0/json-events (25.20s)

TestDownloadOnly/v1.34.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0-beta.0/preload-exists
I0804 08:35:17.858594 1582690 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
I0804 08:35:17.858652 1582690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0-beta.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-769840
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-769840: exit status 85 (59.879671ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-723176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-723176 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │ 04 Aug 25 08:34 UTC │
	│ delete  │ -p download-only-723176                                                                                                                                                              │ download-only-723176 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │ 04 Aug 25 08:34 UTC │
	│ start   │ -o=json --download-only -p download-only-348320 --force --alsologtostderr --kubernetes-version=v1.33.3 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-348320 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │ 04 Aug 25 08:34 UTC │
	│ delete  │ -p download-only-348320                                                                                                                                                              │ download-only-348320 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │ 04 Aug 25 08:34 UTC │
	│ start   │ -o=json --download-only -p download-only-769840 --force --alsologtostderr --kubernetes-version=v1.34.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-769840 │ jenkins │ v1.36.0 │ 04 Aug 25 08:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/08/04 08:34:52
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 08:34:52.703898 1583466 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:34:52.704133 1583466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:34:52.704141 1583466 out.go:358] Setting ErrFile to fd 2...
	I0804 08:34:52.704145 1583466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:34:52.704318 1583466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:34:52.704837 1583466 out.go:352] Setting JSON to true
	I0804 08:34:52.705692 1583466 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":148582,"bootTime":1754147911,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:34:52.705792 1583466 start.go:140] virtualization: kvm guest
	I0804 08:34:52.707381 1583466 out.go:97] [download-only-769840] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:34:52.707545 1583466 notify.go:220] Checking for updates...
	I0804 08:34:52.708638 1583466 out.go:169] MINIKUBE_LOCATION=21223
	I0804 08:34:52.709908 1583466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:34:52.711024 1583466 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:34:52.711956 1583466 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:34:52.712893 1583466 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 08:34:52.714454 1583466 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 08:34:52.714651 1583466 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:34:52.736550 1583466 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:34:52.736601 1583466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:34:52.784361 1583466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-08-04 08:34:52.77542693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:34:52.784479 1583466 docker.go:318] overlay module found
	I0804 08:34:52.785778 1583466 out.go:97] Using the docker driver based on user configuration
	I0804 08:34:52.785801 1583466 start.go:304] selected driver: docker
	I0804 08:34:52.785807 1583466 start.go:918] validating driver "docker" against <nil>
	I0804 08:34:52.785893 1583466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:34:52.831640 1583466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-08-04 08:34:52.822637786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:34:52.831796 1583466 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0804 08:34:52.832339 1583466 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0804 08:34:52.832528 1583466 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 08:34:52.833951 1583466 out.go:169] Using Docker driver with root privileges
	I0804 08:34:52.834922 1583466 cni.go:84] Creating CNI manager for ""
	I0804 08:34:52.834999 1583466 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 08:34:52.835012 1583466 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 08:34:52.835096 1583466 start.go:348] cluster config:
	{Name:download-only-769840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:download-only-769840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:34:52.836139 1583466 out.go:97] Starting "download-only-769840" primary control-plane node in "download-only-769840" cluster
	I0804 08:34:52.836158 1583466 cache.go:121] Beginning downloading kic base image for docker with docker
	I0804 08:34:52.837025 1583466 out.go:97] Pulling base image v0.0.47-1753871403-21198 ...
	I0804 08:34:52.837047 1583466 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:34:52.837152 1583466 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local docker daemon
	I0804 08:34:52.853295 1583466 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d to local cache
	I0804 08:34:52.853491 1583466 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local cache directory
	I0804 08:34:52.853514 1583466 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d in local cache directory, skipping pull
	I0804 08:34:52.853520 1583466 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d exists in cache, skipping pull
	I0804 08:34:52.853532 1583466 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d as a tarball
	I0804 08:34:53.416384 1583466 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0-beta.0/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:34:53.416432 1583466 cache.go:56] Caching tarball of preloaded images
	I0804 08:34:53.416653 1583466 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:34:53.418125 1583466 out.go:97] Downloading Kubernetes v1.34.0-beta.0 preload ...
	I0804 08:34:53.418146 1583466 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 08:34:53.573651 1583466 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0-beta.0/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:0be61c8e3b1d16f1fde5ef9ea9672941 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0804 08:35:06.115147 1583466 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 08:35:06.115237 1583466 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 08:35:06.755767 1583466 cache.go:59] Finished verifying existence of preloaded tar for v1.34.0-beta.0 on docker
	I0804 08:35:06.756144 1583466 profile.go:143] Saving config to /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/download-only-769840/config.json ...
	I0804 08:35:06.756181 1583466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/download-only-769840/config.json: {Name:mkbb9f8c67c4bef0bc3753ead68314f8e17ee23d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 08:35:06.756395 1583466 preload.go:131] Checking if preload exists for k8s version v1.34.0-beta.0 and runtime docker
	I0804 08:35:06.756627 1583466 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21223-1578987/.minikube/cache/linux/amd64/v1.34.0-beta.0/kubectl
	
	
	* The control-plane node download-only-769840 host does not exist
	  To start a cluster, run: "minikube start -p download-only-769840"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0-beta.0/LogsDuration (0.06s)
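Note on the preload flow traced above: the download.go:108 URL carries a ?checksum=md5:... parameter, and the preload.go:247/254 lines re-hash the tarball on disk before trusting it. The sketch below shows that verify-on-disk step in isolation, assuming the tarball sits in the working directory; it is illustrative only, not minikube's actual implementation (which lives under pkg/minikube/download).

    // checksum_sketch.go — hash a downloaded file and compare against the
    // digest that was embedded in the download URL (taken from the log above).
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    // verifyMD5 streams path through MD5 and compares the hex digest to want.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        err := verifyMD5("preloaded-images-k8s-v18-v1.34.0-beta.0-docker-overlay2-amd64.tar.lz4",
            "0be61c8e3b1d16f1fde5ef9ea9672941")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("preload tarball verified")
    }

The kubectl download on the last download.go line uses the same mechanism, except the expected digest comes from the published .sha256 file (checksum=file:...) rather than being embedded in the URL.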

TestDownloadOnly/v1.34.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.34.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.34.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-769840
--- PASS: TestDownloadOnly/v1.34.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.08s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-181307 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-181307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-181307
--- PASS: TestDownloadOnlyKic (1.08s)

TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
I0804 08:35:19.711024 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-436484 --alsologtostderr --binary-mirror http://127.0.0.1:42555 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-436484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-436484
--- PASS: TestBinaryMirror (0.77s)

TestOffline (77.89s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-374131 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-374131 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m15.74833643s)
helpers_test.go:175: Cleaning up "offline-docker-374131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-374131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-374131: (2.140158239s)
--- PASS: TestOffline (77.89s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-309866
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-309866: exit status 85 (52.901935ms)

-- stdout --
	* Profile "addons-309866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-309866"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-309866
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-309866: exit status 85 (54.220568ms)

-- stdout --
	* Profile "addons-309866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-309866"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
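Both PreSetup tests above pass precisely because the command fails: enabling or disabling an addon on a profile that does not exist is expected to exit with status 85. A minimal sketch of that kind of exit-code assertion with os/exec (binary and profile name taken from the log; this is not the helper the suite itself uses):

    // exitcode_sketch.go — run a command that should fail and inspect its exit code.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64",
            "addons", "enable", "dashboard", "-p", "addons-309866")
        out, err := cmd.CombinedOutput()

        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // A missing profile is reported as exit status 85.
            fmt.Printf("exit code %d, output:\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            log.Fatal(err) // e.g. the binary was not found at all
        }
        log.Fatal("expected a non-zero exit for a missing profile")
    }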

TestAddons/Setup (222.95s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-309866 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-309866 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m42.954341619s)
--- PASS: TestAddons/Setup (222.95s)

TestAddons/serial/Volcano (41.73s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 10.121197ms
addons_test.go:868: volcano-scheduler stabilized in 10.166168ms
addons_test.go:876: volcano-admission stabilized in 10.206412ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-854568c9bb-4chpw" [d445807b-2108-44e6-8599-e7df53f4f492] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003443003s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-55859c8887-qzdtv" [4ab27edc-6ad1-4981-a543-59a4732156ea] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003274964s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-7b774bbd55-ngjhb" [0af8cca8-8e3d-4e84-b044-fb70240141ec] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003706248s
addons_test.go:903: (dbg) Run:  kubectl --context addons-309866 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-309866 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-309866 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c25cb57b-7517-4e34-9285-4f01009a4d46] Pending
helpers_test.go:344: "test-job-nginx-0" [c25cb57b-7517-4e34-9285-4f01009a4d46] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c25cb57b-7517-4e34-9285-4f01009a4d46] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003315772s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-309866 addons disable volcano --alsologtostderr -v=1: (11.374791607s)
--- PASS: TestAddons/serial/Volcano (41.73s)
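The "waiting 6m0s for pods matching ..." lines above all follow one pattern: list pods by label selector and poll until every match reports phase Running. A rough client-go equivalent, assuming a kubeconfig at the default path (namespace and selector borrowed from the Volcano test; the suite's own helper lives in helpers_test.go and also tolerates richer readiness states):

    // podwait_sketch.go — poll until all pods matching a label selector are Running.
    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether the list is non-empty and every pod is Running.
    func allRunning(pods []corev1.Pod) bool {
        if len(pods) == 0 {
            return false
        }
        for _, p := range pods {
            if p.Status.Phase != corev1.PodRunning {
                return false
            }
        }
        return true
    }

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("volcano-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
            if err == nil && allRunning(pods.Items) {
                fmt.Println("pods healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pods")
    }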

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-309866 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-309866 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-309866 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-309866 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c18085b-7dea-4403-83b6-f92e1d7b514e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c18085b-7dea-4403-83b6-f92e1d7b514e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003985555s
addons_test.go:694: (dbg) Run:  kubectl --context addons-309866 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-309866 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-309866 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

TestAddons/parallel/Registry (18.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.153187ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-6gdnx" [797ab9a9-028a-4f06-8007-616a7a79c2af] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002900259s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5vgrw" [34ae147f-28b4-4941-b76e-9b6b2efe1d48] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003032188s
addons_test.go:392: (dbg) Run:  kubectl --context addons-309866 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-309866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-309866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.605463518s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 ip
2025/08/04 08:40:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.34s)
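The wget --spider -S step above is a headers-only probe proving the in-cluster registry answers; the service name registry.kube-system.svc.cluster.local only resolves inside the cluster, which is why the test runs it from a throwaway busybox pod. The same probe restated in Go (it would likewise have to run in-cluster):

    // spider_sketch.go — HEAD request against the registry service, headers only.
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // any HTTP response means the service is reachable
    }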

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 1.519555ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-309866
addons_test.go:332: (dbg) Run:  kubectl --context addons-309866 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (21.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-309866 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-309866 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-309866 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2a0ca0e3-08f7-4617-a82e-4b37c2aaed86] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2a0ca0e3-08f7-4617-a82e-4b37c2aaed86] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003460549s
I0804 08:40:40.910259 1582690 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-309866 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-309866 addons disable ingress --alsologtostderr -v=1: (7.575894277s)
--- PASS: TestAddons/parallel/Ingress (21.67s)
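The curl step above works because ingress-nginx routes on the HTTP Host header: the request can target a node or loopback address while naming the ingress host nginx.example.com. A sketch of the same probe in Go, using the cluster IP from the log for illustration (note that in net/http the Host header is set through Request.Host, not Header.Set):

    // hostheader_sketch.go — send a request to one address while presenting
    // a different Host header, the trick behind the ingress test above.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Host = "nginx.example.com" // ingress-nginx routes on this value

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body[:min(len(body), 200)])) // first bytes of the page
    }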

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fqcdq" [0a73366c-dddb-49a6-a7ce-b7e095f5ac6c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.068728982s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 1.950801ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-cdc7dfdcd-mjdgb" [1e8f9d84-d00e-4a07-8f9f-ef5ce02b0581] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003479545s
addons_test.go:463: (dbg) Run:  kubectl --context addons-309866 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/CSI (63.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0804 08:40:10.982501 1582690 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0804 08:40:10.986065 1582690 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0804 08:40:10.986098 1582690 kapi.go:107] duration metric: took 3.620531ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.635489ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-309866 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-309866 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [52213d79-9cc4-4a36-8f77-b6d8a4aa9660] Pending
helpers_test.go:344: "task-pv-pod" [52213d79-9cc4-4a36-8f77-b6d8a4aa9660] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [52213d79-9cc4-4a36-8f77-b6d8a4aa9660] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003852534s
addons_test.go:572: (dbg) Run:  kubectl --context addons-309866 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-309866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-309866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-309866 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-309866 delete pod task-pv-pod: (1.094534187s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-309866 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-309866 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-309866 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a0df1652-df19-4148-b46a-9c93d0200346] Pending
helpers_test.go:344: "task-pv-pod-restore" [a0df1652-df19-4148-b46a-9c93d0200346] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a0df1652-df19-4148-b46a-9c93d0200346] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003337118s
addons_test.go:614: (dbg) Run:  kubectl --context addons-309866 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-309866 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-309866 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-309866 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.485088074s)
--- PASS: TestAddons/parallel/CSI (63.40s)
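Each repeated kubectl get pvc ... -o jsonpath={.status.phase} line above is one iteration of a poll that ends once the claim reports Bound. A rough client-go version of that loop, assuming a kubeconfig at the default path (claim name and namespace taken from the test; not the suite's actual helper):

    // pvcwait_sketch.go — poll a PersistentVolumeClaim until its phase is Bound.
    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(
                context.TODO(), "hpvc", metav1.GetOptions{})
            if err == nil && pvc.Status.Phase == corev1.ClaimBound {
                fmt.Println("pvc bound")
                return
            }
            time.Sleep(2 * time.Second) // comparable cadence to the polling above
        }
        log.Fatal("timed out waiting for pvc to bind")
    }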

TestAddons/parallel/Headlamp (22.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-309866 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-r4m7r" [4820e989-bcd3-4225-849d-71630a7b3e0c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-r4m7r" [4820e989-bcd3-4225-849d-71630a7b3e0c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003541931s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-309866 addons disable headlamp --alsologtostderr -v=1: (5.549706042s)
--- PASS: TestAddons/parallel/Headlamp (22.40s)

TestAddons/parallel/CloudSpanner (5.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6c9c9cb584-rrt7m" [8df486eb-c922-41e2-9f1c-b7a8a3eec9f3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003272332s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

TestAddons/parallel/LocalPath (58.63s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-309866 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-309866 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a57262b4-a4d9-4dcf-8051-2293d229f26f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a57262b4-a4d9-4dcf-8051-2293d229f26f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a57262b4-a4d9-4dcf-8051-2293d229f26f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.002942346s
addons_test.go:967: (dbg) Run:  kubectl --context addons-309866 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 ssh "cat /opt/local-path-provisioner/pvc-ba3b5a4f-b590-4e77-9dbb-70d4d9803be0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-309866 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-309866 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-309866 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.76265079s)
--- PASS: TestAddons/parallel/LocalPath (58.63s)

TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t2lgl" [2a79a477-bdd9-4a59-886f-9f8df3097a66] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003894662s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/Yakd (11.57s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-5vrds" [7d076d02-3876-4cab-9fc0-f4625857a429] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00385036s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-309866 addons disable yakd --alsologtostderr -v=1: (5.567737044s)
--- PASS: TestAddons/parallel/Yakd (11.57s)

TestAddons/parallel/AmdGpuDevicePlugin (6.4s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-6kq8b" [b77f9150-7434-4fca-936e-9f59dea60808] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003183796s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-309866 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.40s)

TestAddons/StoppedEnableDisable (11.05s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-309866
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-309866: (10.800234464s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-309866
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-309866
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-309866
--- PASS: TestAddons/StoppedEnableDisable (11.05s)

TestCertOptions (35.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-630320 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0804 09:49:46.064461 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-630320 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.441311434s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-630320 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-630320 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-630320 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-630320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-630320
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-630320: (2.084216671s)
--- PASS: TestCertOptions (35.09s)
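The openssl x509 -text step above checks that the extra --apiserver-ips and --apiserver-names values landed in the API server certificate's subjectAltName. A small Go sketch that inspects the same fields with crypto/x509 (cert path from the test; it would have to run inside the minikube node, e.g. via minikube ssh):

    // certsan_sketch.go — print the SANs and expiry of the apiserver certificate.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
        fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
        fmt.Println("expires: ", cert.NotAfter)    // what TestCertExpiration below exercises
    }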

TestCertExpiration (255.68s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-948981 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0804 09:49:25.583109 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-948981 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (29.707998365s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-948981 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-948981 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (43.818813447s)
helpers_test.go:175: Cleaning up "cert-expiration-948981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-948981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-948981: (2.147668866s)
--- PASS: TestCertExpiration (255.68s)
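
The --cert-expiration values above are Go duration strings (an assumption consistent with minikube being written in Go); a quick sketch showing why 8760h amounts to one year while 3m forces a near-immediate expiry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	long, err := time.ParseDuration("8760h") // the second start above
    	if err != nil {
    		panic(err)
    	}
    	short, err := time.ParseDuration("3m") // the first, deliberately short start
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("8760h = %.0f days\n", long.Hours()/24) // 365 days
    	fmt.Printf("3m    = %v\n", short)                  // 3m0s
    }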
TestDockerFlags (31.59s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-308103 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-308103 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.920134707s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-308103 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-308103 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-308103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-308103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-308103: (2.143304479s)
--- PASS: TestDockerFlags (31.59s)
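
A small sketch, assuming the usual key=value output format of `systemctl show`, of the assertion this test makes: every --docker-env flag must surface in the docker unit's Environment property.

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Example line as `systemctl show docker --property=Environment` prints it.
    	line := "Environment=FOO=BAR BAZ=BAT"
    	env := strings.TrimPrefix(line, "Environment=")
    	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} { // the --docker-env values above
    		if !strings.Contains(env, want) {
    			fmt.Println("missing docker env:", want)
    		}
    	}
    	fmt.Println("docker env entries:", strings.Fields(env))
    }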
TestForceSystemdFlag (29s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-729950 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0804 09:49:03.491997 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.087780 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.094173 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.105900 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.127298 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.168682 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.250633 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.412443 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:05.734688 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:06.375951 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:07.658231 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:10.219633 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-729950 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.492424173s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-729950 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-729950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-729950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-729950: (2.193902358s)
--- PASS: TestForceSystemdFlag (29.00s)
TestForceSystemdEnv (30.14s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-535183 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-535183 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.74435017s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-535183 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-535183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-535183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-535183: (2.098675241s)
--- PASS: TestForceSystemdEnv (30.14s)
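
Both force-systemd tests end with the same check; a hedged os/exec sketch of it (run inside the node, where the docker CLI is available): with systemd forced, `docker info` should report the systemd cgroup driver rather than cgroupfs.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	driver := strings.TrimSpace(string(out))
    	if driver != "systemd" {
    		fmt.Println("expected systemd cgroup driver, got:", driver)
    		return
    	}
    	fmt.Println("cgroup driver:", driver)
    }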
TestKVMDriverInstallOrUpdate (4.25s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0804 09:45:46.554412 1582690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0804 09:45:46.554553 1582690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0804 09:45:46.583800 1582690 install.go:62] docker-machine-driver-kvm2: exit status 1
W0804 09:45:46.583967 1582690 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0804 09:45:46.584030 1582690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3535945281/001/docker-machine-driver-kvm2
I0804 09:45:47.297610 1582690 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3535945281/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0] Decompressors:map[bz2:0xc000781010 gz:0xc000781018 tar:0xc000780fa0 tar.bz2:0xc000780fc0 tar.gz:0xc000780fd0 tar.xz:0xc000780ff0 tar.zst:0xc000781000 tbz2:0xc000780fc0 tgz:0xc000780fd0 txz:0xc000780ff0 tzst:0xc000781000 xz:0xc000781020 zip:0xc000781030 zst:0xc000781028] Getters:map[file:0xc001db03c0 http:0xc001f2c370 https:0xc001f2c3c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0804 09:45:47.297676 1582690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3535945281/001/docker-machine-driver-kvm2
I0804 09:45:49.396869 1582690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0804 09:45:49.396959 1582690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0804 09:45:49.427385 1582690 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0804 09:45:49.427416 1582690 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0804 09:45:49.427475 1582690 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0804 09:45:49.427499 1582690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3535945281/002/docker-machine-driver-kvm2
I0804 09:45:49.787000 1582690 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3535945281/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0 0x57fa4e0] Decompressors:map[bz2:0xc000781010 gz:0xc000781018 tar:0xc000780fa0 tar.bz2:0xc000780fc0 tar.gz:0xc000780fd0 tar.xz:0xc000780ff0 tar.zst:0xc000781000 tbz2:0xc000780fc0 tgz:0xc000780fd0 txz:0xc000780ff0 tzst:0xc000781000 xz:0xc000781020 zip:0xc000781030 zst:0xc000781028] Getters:map[file:0xc00170d2f0 http:0xc000536500 https:0xc000536550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0804 09:45:49.787063 1582690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3535945281/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.25s)
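
The log above shows the download fallback at work: the arch-specific checksum URL 404s, so the installer retries the un-suffixed "common" URL. A simplified sketch of that ordering (helper names are illustrative, not minikube's own):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // checksumAvailable is a hypothetical stand-in for the real download step:
    // it only verifies that the checksum file responds with 200.
    func checksumAvailable(url string) error {
    	resp, err := http.Head(url)
    	if err != nil {
    		return err
    	}
    	resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("bad response code: %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
    	if err := checksumAvailable(base + "-amd64.sha256"); err != nil {
    		fmt.Println("arch specific driver failed:", err, "- trying the common version")
    		if err := checksumAvailable(base + ".sha256"); err != nil {
    			fmt.Println("common version failed too:", err)
    			return
    		}
    	}
    	fmt.Println("driver checksum reachable")
    }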
TestErrorSpam/setup (28.14s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-992803 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-992803 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-992803 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-992803 --driver=docker  --container-runtime=docker: (28.137383592s)
--- PASS: TestErrorSpam/setup (28.14s)
TestErrorSpam/start (0.57s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)
TestErrorSpam/status (0.84s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 status
--- PASS: TestErrorSpam/status (0.84s)
TestErrorSpam/pause (1.15s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 pause
--- PASS: TestErrorSpam/pause (1.15s)
TestErrorSpam/unpause (1.4s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 unpause
--- PASS: TestErrorSpam/unpause (1.40s)
TestErrorSpam/stop (10.84s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 stop: (10.661477979s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992803 --log_dir /tmp/nospam-992803 stop
--- PASS: TestErrorSpam/stop (10.84s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (66.7s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-114794 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-114794 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m6.702909794s)
--- PASS: TestFunctional/serial/StartWithProxy (66.70s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (86.77s)
=== RUN   TestFunctional/serial/SoftStart
I0804 08:43:19.482967 1582690 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-114794 --alsologtostderr -v=8
E0804 08:44:03.491608 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:03.498021 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:03.509302 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:03.530645 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:03.571999 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:03.653426 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:03.814972 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:04.136685 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:04.778461 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:06.060229 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:08.621987 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:13.743775 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:23.985211 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 08:44:44.466879 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-114794 --alsologtostderr -v=8: (1m26.772742482s)
functional_test.go:680: soft start took 1m26.773517341s for "functional-114794" cluster.
I0804 08:44:46.256101 1582690 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestFunctional/serial/SoftStart (86.77s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.12s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-114794 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)
TestFunctional/serial/CacheCmd/cache/add_remote (2.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.31s)
TestFunctional/serial/CacheCmd/cache/add_local (2.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-114794 /tmp/TestFunctionalserialCacheCmdcacheadd_local815305002/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cache add minikube-local-cache-test:functional-114794
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-114794 cache add minikube-local-cache-test:functional-114794: (1.989942717s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cache delete minikube-local-cache-test:functional-114794
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-114794
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.28s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (259.788719ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)
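
The round trip above (delete the image in the node, watch crictl fail, `cache reload`, watch it succeed) can be scripted; a sketch with os/exec, reusing the binary and profile names from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("$ %s %v\n%s", name, args, out)
    	return err
    }

    func main() {
    	mk, profile := "out/minikube-linux-amd64", "functional-114794"
    	_ = run(mk, "-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
    	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
    		fmt.Println("image unexpectedly still present")
    	}
    	_ = run(mk, "-p", profile, "cache", "reload")
    	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
    		fmt.Println("image still missing after reload:", err)
    	}
    }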
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 kubectl -- --context functional-114794 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-114794 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)
TestFunctional/serial/ExtraConfig (41.62s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-114794 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0804 08:45:25.429441 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-114794 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.618959581s)
functional_test.go:778: restart took 41.619092457s for "functional-114794" cluster.
I0804 08:45:34.549818 1582690 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestFunctional/serial/ExtraConfig (41.62s)
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-114794 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
TestFunctional/serial/LogsCmd (0.92s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 logs
--- PASS: TestFunctional/serial/LogsCmd (0.92s)
TestFunctional/serial/LogsFileCmd (0.95s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 logs --file /tmp/TestFunctionalserialLogsFileCmd1358998703/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.95s)
TestFunctional/serial/InvalidService (4.99s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-114794 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-114794
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-114794: exit status 115 (323.423467ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32016 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-114794 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-114794 delete -f testdata/invalidsvc.yaml: (1.496172458s)
--- PASS: TestFunctional/serial/InvalidService (4.99s)
TestFunctional/parallel/ConfigCmd (0.39s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 config get cpus: exit status 14 (71.355338ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 config get cpus: exit status 14 (62.408426ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
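
Note the contract this exercises: `config get` on an unset key exits with status 14, distinct from other failures. A hedged sketch of how a caller could branch on that code:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-114794", "config", "get", "cpus")
    	out, err := cmd.CombinedOutput()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
    		fmt.Println("cpus is not set in config") // matches the exit status 14 above
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("cpus = %s", out)
    }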
TestFunctional/parallel/DashboardCmd (34.52s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-114794 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-114794 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1639937: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (34.52s)
TestFunctional/parallel/DryRun (0.4s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-114794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-114794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (162.657978ms)
-- stdout --
	* [functional-114794] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0804 08:46:03.288061 1639432 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:46:03.288331 1639432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:46:03.288344 1639432 out.go:358] Setting ErrFile to fd 2...
	I0804 08:46:03.288350 1639432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:46:03.288557 1639432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:46:03.289205 1639432 out.go:352] Setting JSON to false
	I0804 08:46:03.290640 1639432 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149252,"bootTime":1754147911,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:46:03.290767 1639432 start.go:140] virtualization: kvm guest
	I0804 08:46:03.292812 1639432 out.go:177] * [functional-114794] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 08:46:03.294188 1639432 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:46:03.294197 1639432 notify.go:220] Checking for updates...
	I0804 08:46:03.296663 1639432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:46:03.297846 1639432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:46:03.299061 1639432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:46:03.300308 1639432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:46:03.301587 1639432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:46:03.303331 1639432 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 08:46:03.304072 1639432 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:46:03.329811 1639432 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:46:03.329945 1639432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:46:03.386999 1639432 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-08-04 08:46:03.376484364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:46:03.387107 1639432 docker.go:318] overlay module found
	I0804 08:46:03.389867 1639432 out.go:177] * Using the docker driver based on existing profile
	I0804 08:46:03.390934 1639432 start.go:304] selected driver: docker
	I0804 08:46:03.390955 1639432 start.go:918] validating driver "docker" against &{Name:functional-114794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:functional-114794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:46:03.391062 1639432 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:46:03.393355 1639432 out.go:201] 
	W0804 08:46:03.394543 1639432 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0804 08:46:03.395727 1639432 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-114794 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.40s)
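
The dry run fails validation before any resources are created: 250MB requested versus the 1800MB usable minimum quoted in the stderr. A toy sketch of that check (the constant comes from the log message; the function name is illustrative, not minikube's own):

    package main

    import "fmt"

    const minUsableMemoryMB = 1800 // the minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

    func validateRequestedMemory(requestedMB int) error {
    	if requestedMB < minUsableMemoryMB {
    		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
    			requestedMB, minUsableMemoryMB)
    	}
    	return nil
    }

    func main() {
    	if err := validateRequestedMemory(250); err != nil {
    		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
    	}
    }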
TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-114794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-114794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.536756ms)
-- stdout --
	* [functional-114794] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0804 08:46:21.398037 1642337 out.go:345] Setting OutFile to fd 1 ...
	I0804 08:46:21.398189 1642337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:46:21.398201 1642337 out.go:358] Setting ErrFile to fd 2...
	I0804 08:46:21.398207 1642337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 08:46:21.398653 1642337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 08:46:21.399304 1642337 out.go:352] Setting JSON to false
	I0804 08:46:21.400725 1642337 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":149270,"bootTime":1754147911,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 08:46:21.400859 1642337 start.go:140] virtualization: kvm guest
	I0804 08:46:21.402803 1642337 out.go:177] * [functional-114794] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0804 08:46:21.404522 1642337 notify.go:220] Checking for updates...
	I0804 08:46:21.404539 1642337 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 08:46:21.405797 1642337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 08:46:21.407181 1642337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 08:46:21.408344 1642337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 08:46:21.409533 1642337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 08:46:21.410906 1642337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 08:46:21.412611 1642337 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 08:46:21.413291 1642337 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 08:46:21.439569 1642337 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 08:46:21.439686 1642337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 08:46:21.504107 1642337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-08-04 08:46:21.492619948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 08:46:21.504237 1642337 docker.go:318] overlay module found
	I0804 08:46:21.506933 1642337 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0804 08:46:21.508125 1642337 start.go:304] selected driver: docker
	I0804 08:46:21.508145 1642337 start.go:918] validating driver "docker" against &{Name:functional-114794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.3 ClusterName:functional-114794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 08:46:21.508323 1642337 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 08:46:21.510730 1642337 out.go:201] 
	W0804 08:46:21.511873 1642337 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	[en: Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB]
	I0804 08:46:21.513041 1642337 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
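The three invocations above cover minikube's status output modes: the default table, a Go template via -f (the template keys mirror the fields shown in the log), and JSON. A minimal sketch of the same checks by hand, with the profile name taken from this log:

    out/minikube-linux-amd64 -p functional-114794 status
    out/minikube-linux-amd64 -p functional-114794 status -f '{{.Host}}'    # a single field
    out/minikube-linux-amd64 -p functional-114794 status -o json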
TestFunctional/parallel/ServiceCmdConnect (12.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-114794 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-114794 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hlgxf" [e415239e-23e7-48a5-99c4-026fe88e6813] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hlgxf" [e415239e-23e7-48a5-99c4-026fe88e6813] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003358363s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31750
functional_test.go:1692: http://192.168.49.2:31750: success! body:

Hostname: hello-node-connect-58f9cf68d8-hlgxf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31750
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.60s)
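The flow above can be reproduced by hand: create a deployment, expose it as a NodePort service, then let minikube resolve the node URL. A sketch, assuming the same profile and image:

    kubectl --context functional-114794 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-114794 expose deployment hello-node-connect --type=NodePort --port=8080
    curl "$(out/minikube-linux-amd64 -p functional-114794 service hello-node-connect --url)"

The echoserver body quoted above is the expected response: it echoes the request headers and the client address as seen from inside the cluster.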
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
TestFunctional/parallel/PersistentVolumeClaim (52.2s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bd3ea051-45c7-4f0e-b23e-687f38c3f3a9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003794773s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-114794 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-114794 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-114794 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-114794 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d589359-53d2-4252-8c1c-e89ee0444752] Pending
helpers_test.go:344: "sp-pod" [2d589359-53d2-4252-8c1c-e89ee0444752] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d589359-53d2-4252-8c1c-e89ee0444752] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003805583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-114794 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-114794 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-114794 delete -f testdata/storage-provisioner/pod.yaml: (2.349846476s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-114794 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dcec8f52-c596-4c6a-a66d-c029edbb16aa] Pending
helpers_test.go:344: "sp-pod" [dcec8f52-c596-4c6a-a66d-c029edbb16aa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dcec8f52-c596-4c6a-a66d-c029edbb16aa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003048227s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-114794 exec sp-pod -- ls /tmp/mount
2025/08/04 08:46:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.20s)
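The pass hinges on data surviving pod deletion: the test writes a file through the claim, deletes and recreates the pod against the same PVC, and expects the file to still be there. Condensed from the commands above:

    kubectl --context functional-114794 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-114794 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-114794 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-114794 exec sp-pod -- ls /tmp/mount    # foo should survive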
TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)
TestFunctional/parallel/CpCmd (1.91s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh -n functional-114794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cp functional-114794:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1778629893/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh -n functional-114794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh -n functional-114794 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)
TestFunctional/parallel/MySQL (26.26s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-114794 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-x8mtv" [698ca1c5-6a34-4852-b066-325320c933d7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-x8mtv" [698ca1c5-6a34-4852-b066-325320c933d7] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.049528143s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;": exit status 1 (107.589114ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0804 08:46:20.170683 1582690 retry.go:31] will retry after 1.01774609s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;": exit status 1 (116.347779ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0804 08:46:21.305614 1582690 retry.go:31] will retry after 780.149882ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;": exit status 1 (280.776606ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0804 08:46:22.367358 1582690 retry.go:31] will retry after 1.474050304s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.26s)
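The ERROR 2002 failures above are expected noise: the pod reports Running before mysqld has created its socket, so the harness retries with backoff until the query succeeds. A hand-rolled sketch of that retry loop, pod name taken from the log:

    until kubectl --context functional-114794 exec mysql-58ccfd96bb-x8mtv -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 1    # mysqld is still starting; ERROR 2002 until the socket exists
    done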
TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1582690/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /etc/test/nested/copy/1582690/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
TestFunctional/parallel/CertSync (1.73s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1582690.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /etc/ssl/certs/1582690.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1582690.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /usr/share/ca-certificates/1582690.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/15826902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /etc/ssl/certs/15826902.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/15826902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /usr/share/ca-certificates/15826902.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)
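The hashed filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links to the synced certificates, the scheme OpenSSL uses to look up CA files in a directory. Assuming that naming convention, the hash can be checked against the .pem directly:

    out/minikube-linux-amd64 -p functional-114794 ssh \
      "openssl x509 -noout -hash -in /usr/share/ca-certificates/1582690.pem"    # expect 51391683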
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-114794 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh "sudo systemctl is-active crio": exit status 1 (242.125469ms)
-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
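The non-zero exit is the point of this test: systemctl is-active returns 0 only for an active unit, so status 3 plus "inactive" on stdout confirms crio is disabled while docker is the active runtime. Reproduced by hand:

    out/minikube-linux-amd64 -p functional-114794 ssh "sudo systemctl is-active crio"
    # prints "inactive"; the ssh wrapper surfaces the remote non-zero status as a failure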
TestFunctional/parallel/License (0.72s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.72s)
TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-114794 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-114794 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-pgp2n" [0c96b98d-22ad-461e-86f0-3335917855db] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-pgp2n" [0c96b98d-22ad-461e-86f0-3335917855db] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003333792s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-114794 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-114794 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-114794 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-114794 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1634180: os: process already finished
helpers_test.go:502: unable to terminate pid 1633758: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-114794 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.31s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-114794 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6296d8cf-1b30-419a-b40d-e6caf424edd0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6296d8cf-1b30-419a-b40d-e6caf424edd0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.036528801s
I0804 08:45:56.631487 1582690 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.31s)
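With a minikube tunnel process running, LoadBalancer services receive a reachable ingress IP instead of staying pending. A sketch of watching that happen outside the harness, using the jsonpath query the suite itself runs later:

    out/minikube-linux-amd64 -p functional-114794 tunnel &
    kubectl --context functional-114794 get svc nginx-svc -w    # wait for EXTERNAL-IP
    kubectl --context functional-114794 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'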
TestFunctional/parallel/ServiceCmd/List (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 service list -o json
functional_test.go:1511: Took "465.290887ms" to run "out/minikube-linux-amd64 -p functional-114794 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31772
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
TestFunctional/parallel/ServiceCmd/Format (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)
TestFunctional/parallel/ServiceCmd/URL (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31772
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (0.91s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-114794 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.3
registry.k8s.io/kube-proxy:v1.33.3
registry.k8s.io/kube-controller-manager:v1.33.3
registry.k8s.io/kube-apiserver:v1.33.3
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.12.0
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-114794
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-114794
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-114794 image ls --format short --alsologtostderr:
I0804 08:46:22.958774 1642844 out.go:345] Setting OutFile to fd 1 ...
I0804 08:46:22.959324 1642844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:22.959338 1642844 out.go:358] Setting ErrFile to fd 2...
I0804 08:46:22.959346 1642844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:22.959829 1642844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 08:46:22.960865 1642844 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:22.961007 1642844 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:22.961494 1642844 cli_runner.go:164] Run: docker container inspect functional-114794 --format={{.State.Status}}
I0804 08:46:22.982525 1642844 ssh_runner.go:195] Run: systemctl --version
I0804 08:46:22.982580 1642844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-114794
I0804 08:46:23.003230 1642844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-114794/id_rsa Username:docker}
I0804 08:46:23.098428 1642844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-114794 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ alpine            │ d6adbc7fd47ec │ 52.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.0           │ 1cf5f116067c6 │ 70.1MB │
│ registry.k8s.io/pause                       │ 3.10              │ 873ed75102791 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-apiserver              │ v1.33.3           │ a92b4b92a9916 │ 102MB  │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ docker.io/kicbase/echo-server               │ functional-114794 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-scheduler              │ v1.33.3           │ 41376797d5122 │ 73.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/echoserver                  │ 1.8               │ 82e4c8a736a4f │ 95.4MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy                  │ v1.33.3           │ af855adae7960 │ 97.9MB │
│ registry.k8s.io/kube-controller-manager     │ v1.33.3           │ bf97fadcef430 │ 94.6MB │
│ docker.io/library/nginx                     │ latest            │ 2cd1d97f893f7 │ 192MB  │
│ registry.k8s.io/etcd                        │ 3.5.21-0          │ 499038711c081 │ 153MB  │
│ docker.io/library/minikube-local-cache-test │ functional-114794 │ 797de983819df │ 30B    │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-114794 image ls --format table --alsologtostderr:
I0804 08:46:24.350078 1643240 out.go:345] Setting OutFile to fd 1 ...
I0804 08:46:24.350300 1643240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:24.350308 1643240 out.go:358] Setting ErrFile to fd 2...
I0804 08:46:24.350311 1643240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:24.350505 1643240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 08:46:24.351036 1643240 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:24.351142 1643240 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:24.351469 1643240 cli_runner.go:164] Run: docker container inspect functional-114794 --format={{.State.Status}}
I0804 08:46:24.369920 1643240 ssh_runner.go:195] Run: systemctl --version
I0804 08:46:24.370006 1643240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-114794
I0804 08:46:24.390510 1643240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-114794/id_rsa Username:docker}
I0804 08:46:24.487374 1643240 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-114794 image ls --format json --alsologtostderr:
[{"id":"2cd1d97f893f70cee86a38b7160c30e5750f3ed6ad86c598884ca9c6a563a501","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"797de983819df1053ceac23ac86649e60f6b5d9fc16abed56c6b616579ffe3df","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-114794"],"size":"30"},{"id":"af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.33.3"],"size":"97900000"},{"id":"1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.0"],"size":"70100000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.3"],"size":"73400000"},{"id":"bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.3"],"size":"94600000"},{"id":"499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"153000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.3"],"size":"102000000"},{"id":"d6adbc7fd47ec44ff968ea826c84f41d0d5a70a2dce4bd030757f9b7fe9040b8","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-114794"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-114794 image ls --format json --alsologtostderr:
I0804 08:46:24.118948 1643146 out.go:345] Setting OutFile to fd 1 ...
I0804 08:46:24.119240 1643146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:24.119252 1643146 out.go:358] Setting ErrFile to fd 2...
I0804 08:46:24.119256 1643146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:24.119493 1643146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 08:46:24.120232 1643146 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:24.120379 1643146 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:24.120781 1643146 cli_runner.go:164] Run: docker container inspect functional-114794 --format={{.State.Status}}
I0804 08:46:24.138451 1643146 ssh_runner.go:195] Run: systemctl --version
I0804 08:46:24.138497 1643146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-114794
I0804 08:46:24.154305 1643146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-114794/id_rsa Username:docker}
I0804 08:46:24.266310 1643146 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-114794 image ls --format yaml --alsologtostderr:
- id: af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.33.3
size: "97900000"
- id: 2cd1d97f893f70cee86a38b7160c30e5750f3ed6ad86c598884ca9c6a563a501
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "153000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-114794
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.3
size: "102000000"
- id: 1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "70100000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 797de983819df1053ceac23ac86649e60f6b5d9fc16abed56c6b616579ffe3df
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-114794
size: "30"
- id: 41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.3
size: "73400000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.3
size: "94600000"
- id: d6adbc7fd47ec44ff968ea826c84f41d0d5a70a2dce4bd030757f9b7fe9040b8
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-114794 image ls --format yaml --alsologtostderr:
I0804 08:46:23.179868 1642892 out.go:345] Setting OutFile to fd 1 ...
I0804 08:46:23.180197 1642892 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:23.180209 1642892 out.go:358] Setting ErrFile to fd 2...
I0804 08:46:23.180213 1642892 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:23.180466 1642892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 08:46:23.181109 1642892 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:23.181220 1642892 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:23.181824 1642892 cli_runner.go:164] Run: docker container inspect functional-114794 --format={{.State.Status}}
I0804 08:46:23.201292 1642892 ssh_runner.go:195] Run: systemctl --version
I0804 08:46:23.201356 1642892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-114794
I0804 08:46:23.221615 1642892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-114794/id_rsa Username:docker}
I0804 08:46:23.326423 1642892 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctional/parallel/ImageCommands/ImageBuild (5.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh pgrep buildkitd: exit status 1 (267.570475ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr: (4.664806009s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-114794 image build -t localhost/my-image:functional-114794 testdata/build --alsologtostderr:
I0804 08:46:23.689284 1643048 out.go:345] Setting OutFile to fd 1 ...
I0804 08:46:23.689401 1643048 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:23.689409 1643048 out.go:358] Setting ErrFile to fd 2...
I0804 08:46:23.689415 1643048 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 08:46:23.689646 1643048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 08:46:23.690181 1643048 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:23.691226 1643048 config.go:182] Loaded profile config "functional-114794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
I0804 08:46:23.692438 1643048 cli_runner.go:164] Run: docker container inspect functional-114794 --format={{.State.Status}}
I0804 08:46:23.713867 1643048 ssh_runner.go:195] Run: systemctl --version
I0804 08:46:23.713923 1643048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-114794
I0804 08:46:23.736023 1643048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-114794/id_rsa Username:docker}
I0804 08:46:23.861894 1643048 build_images.go:161] Building image from path: /tmp/build.582689566.tar
I0804 08:46:23.861964 1643048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0804 08:46:23.872447 1643048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.582689566.tar
I0804 08:46:23.876424 1643048 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.582689566.tar: stat -c "%s %y" /var/lib/minikube/build/build.582689566.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.582689566.tar': No such file or directory
I0804 08:46:23.876454 1643048 ssh_runner.go:362] scp /tmp/build.582689566.tar --> /var/lib/minikube/build/build.582689566.tar (3072 bytes)
I0804 08:46:23.904297 1643048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.582689566
I0804 08:46:23.913839 1643048 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.582689566 -xf /var/lib/minikube/build/build.582689566.tar
I0804 08:46:23.964772 1643048 docker.go:373] Building image: /var/lib/minikube/build/build.582689566
I0804 08:46:23.964849 1643048 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-114794 /var/lib/minikube/build/build.582689566
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0642d009c0b4889db32412fe8467b3cda497dfabc19cdff87c7d6f9c7242d23f done
#8 naming to localhost/my-image:functional-114794 done
#8 DONE 0.0s
I0804 08:46:28.282638 1643048 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-114794 /var/lib/minikube/build/build.582689566: (4.31776289s)
I0804 08:46:28.282721 1643048 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.582689566
I0804 08:46:28.291178 1643048 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.582689566.tar
I0804 08:46:28.299330 1643048 build_images.go:217] Built localhost/my-image:functional-114794 from /tmp/build.582689566.tar
I0804 08:46:28.299360 1643048 build_images.go:133] succeeded building to: functional-114794
I0804 08:46:28.299365 1643048 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.13s)
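
For reference, the ImageBuild flow above can be reproduced by hand. This is a minimal sketch: the Dockerfile contents are an assumption reconstructed from build steps #5-#7 (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt), and /tmp/build-demo is an illustrative path, not the test's actual testdata directory.

    # Recreate the three-step build and run it against the cluster's Docker daemon.
    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo demo > content.txt
    out/minikube-linux-amd64 -p functional-114794 image build -t localhost/my-image:functional-114794 .
    out/minikube-linux-amd64 -p functional-114794 image ls    # the new tag should be listed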

TestFunctional/parallel/ImageCommands/Setup (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (3.529716149s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-114794
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-114794 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.4.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-114794 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
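
Taken together, the TunnelCmd serial steps above amount to the following lifecycle. A rough sketch from an interactive shell; the nginx-svc service name and the 10.107.4.181 ingress IP come from this run and will differ elsewhere.

    out/minikube-linux-amd64 -p functional-114794 tunnel &    # open the tunnel in the background
    kubectl --context functional-114794 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.107.4.181/                                 # AccessDirect: the LoadBalancer IP answers
    kill %1                                                   # DeleteTunnel: terminate the tunnel process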

TestFunctional/parallel/DockerEnv/bash (1s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-114794 docker-env) && out/minikube-linux-amd64 status -p functional-114794"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-114794 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image load --daemon kicbase/echo-server:functional-114794 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
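
All three UpdateContextCmd variants run the same command; it rewrites the kubeconfig entry for the profile so the API server address matches the cluster's current IP. A sketch of checking that by hand (the jsonpath query is illustrative, not part of the test):

    out/minikube-linux-amd64 -p functional-114794 ip                   # current node IP
    out/minikube-linux-amd64 -p functional-114794 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-114794")].cluster.server}'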

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image load --daemon kicbase/echo-server:functional-114794 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Done: docker pull kicbase/echo-server:latest: (1.679313739s)
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-114794
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image load --daemon kicbase/echo-server:functional-114794 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "298.956717ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "50.093141ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "301.003774ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "48.6163ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
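
The profile tests parse this JSON programmatically. For manual inspection, something like the following works; the .valid[].Name path is an assumption about the output shape, and jq is not part of the test harness.

    out/minikube-linux-amd64 profile list -o json | jq '.valid[].Name'
    out/minikube-linux-amd64 profile list -o json --light | jq '.valid[].Name'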

TestFunctional/parallel/MountCmd/any-port (17.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdany-port3945523767/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1754297160172726979" to /tmp/TestFunctionalparallelMountCmdany-port3945523767/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1754297160172726979" to /tmp/TestFunctionalparallelMountCmdany-port3945523767/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1754297160172726979" to /tmp/TestFunctionalparallelMountCmdany-port3945523767/001/test-1754297160172726979
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.751684ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0804 08:46:00.427769 1582690 retry.go:31] will retry after 452.843416ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  4 08:46 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  4 08:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  4 08:46 test-1754297160172726979
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh cat /mount-9p/test-1754297160172726979
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-114794 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ddcb9d99-8401-4216-a3d5-cb6402f8b33a] Pending
helpers_test.go:344: "busybox-mount" [ddcb9d99-8401-4216-a3d5-cb6402f8b33a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ddcb9d99-8401-4216-a3d5-cb6402f8b33a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ddcb9d99-8401-4216-a3d5-cb6402f8b33a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.00346769s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-114794 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdany-port3945523767/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.52s)
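
The any-port flow by hand, sketched from an interactive shell (paths are illustrative; the test additionally runs a busybox pod against the mount):

    mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-test
    out/minikube-linux-amd64 mount -p functional-114794 /tmp/mount-demo:/mount-9p &
    out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry, as above
    out/minikube-linux-amd64 -p functional-114794 ssh "cat /mount-9p/created-by-test"
    out/minikube-linux-amd64 -p functional-114794 ssh "sudo umount -f /mount-9p" && kill %1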

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image save kicbase/echo-server:functional-114794 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image rm kicbase/echo-server:functional-114794 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-114794
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 image save --daemon kicbase/echo-server:functional-114794 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-114794
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
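
The four ImageCommands tests above form one save/load round trip. Condensed (the tar path is illustrative):

    out/minikube-linux-amd64 -p functional-114794 image save kicbase/echo-server:functional-114794 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-114794 image rm kicbase/echo-server:functional-114794
    out/minikube-linux-amd64 -p functional-114794 image load /tmp/echo-server-save.tar        # restore from the tar
    out/minikube-linux-amd64 -p functional-114794 image save --daemon kicbase/echo-server:functional-114794
    docker image inspect kicbase/echo-server:functional-114794                                # now back in the host daemon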

TestFunctional/parallel/MountCmd/specific-port (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdspecific-port2901498052/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.232747ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0804 08:46:18.069658 1582690 retry.go:31] will retry after 493.388885ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdspecific-port2901498052/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh "sudo umount -f /mount-9p": exit status 1 (249.88763ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-114794 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdspecific-port2901498052/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T" /mount1: exit status 1 (321.620572ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0804 08:46:19.869482 1582690 retry.go:31] will retry after 699.951018ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-114794 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-114794 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-114794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2057398278/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
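
VerifyCleanup's key assertion is that a single kill switch tears down every mount process for the profile, which is why the three stop attempts afterwards find no parent process to kill:

    out/minikube-linux-amd64 mount -p functional-114794 --kill=true   # terminates /mount1, /mount2 and /mount3 at once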

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-114794
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-114794
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-114794
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/21223-1578987/.minikube/files/etc/test/nested/copy/1582690/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubeContext (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/KubeContext (0.04s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_remote (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_remote (2.08s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_local (2.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0serialCacheCmdcacheadd_local3695974339/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cache add minikube-local-cache-test:functional-699837
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-699837 cache add minikube-local-cache-test:functional-699837: (2.048174911s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cache delete minikube-local-cache-test:functional-699837
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-699837
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/add_local (2.29s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/cache_reload (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (264.540477ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/cache_reload (1.20s)
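
cache_reload demonstrates that minikube's host-side cache can repopulate the node after an image is deleted inside it. The same four steps, condensed from the log above:

    out/minikube-linux-amd64 -p functional-699837 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-amd64 -p functional-699837 cache reload
    out/minikube-linux-amd64 -p functional-699837 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again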

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/CacheCmd/cache/delete (0.10s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsCmd (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsCmd (0.74s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsFileCmd (0.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0serialLogsFileCmd3307613832/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/serial/LogsFileCmd (0.77s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 config get cpus: exit status 14 (56.526934ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 config get cpus: exit status 14 (53.352766ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ConfigCmd (0.35s)
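
The ConfigCmd cycle, condensed; note that "config get" on an unset key exits with status 14, which the test treats as the expected outcome rather than a failure:

    out/minikube-linux-amd64 -p functional-699837 config set cpus 2
    out/minikube-linux-amd64 -p functional-699837 config get cpus                      # prints 2
    out/minikube-linux-amd64 -p functional-699837 config unset cpus
    out/minikube-linux-amd64 -p functional-699837 config get cpus || echo "exit $?"    # exit 14: key not found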

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DryRun (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 23 (157.204636ms)

-- stdout --
	* [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0804 09:14:12.827506 1684367 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:12.827838 1684367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.827852 1684367 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:12.827859 1684367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:12.828159 1684367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:12.828853 1684367 out.go:352] Setting JSON to false
	I0804 09:14:12.830125 1684367 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150942,"bootTime":1754147911,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:12.830199 1684367 start.go:140] virtualization: kvm guest
	I0804 09:14:12.832054 1684367 out.go:177] * [functional-699837] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:12.833657 1684367 notify.go:220] Checking for updates...
	I0804 09:14:12.834026 1684367 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:12.835281 1684367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:12.836386 1684367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:12.837430 1684367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:12.838289 1684367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:12.839199 1684367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:12.840706 1684367 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:12.841301 1684367 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:12.867852 1684367 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:12.868009 1684367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:12.922738 1684367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-08-04 09:14:12.913268052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:12.922830 1684367 docker.go:318] overlay module found
	I0804 09:14:12.924048 1684367 out.go:177] * Using the docker driver based on existing profile
	I0804 09:14:12.925197 1684367 start.go:304] selected driver: docker
	I0804 09:14:12.925218 1684367 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:12.925347 1684367 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:12.927194 1684367 out.go:201] 
	W0804 09:14:12.928160 1684367 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0804 09:14:12.929023 1684367 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-699837 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/DryRun (0.41s)
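
DryRun checks that configuration validation runs without touching the cluster: the undersized request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while nothing is started, and the second, valid invocation exits cleanly. Condensed:

    out/minikube-linux-amd64 start -p functional-699837 --dry-run --memory 250MB --driver=docker --container-runtime=docker || echo "exit $?"   # 23
    out/minikube-linux-amd64 start -p functional-699837 --dry-run --driver=docker --container-runtime=docker                                   # exit 0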

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-699837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0-beta.0: exit status 23 (152.225303ms)

-- stdout --
	* [functional-699837] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0804 09:14:11.715923 1683521 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:14:11.716018 1683521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:11.716023 1683521 out.go:358] Setting ErrFile to fd 2...
	I0804 09:14:11.716027 1683521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:14:11.716326 1683521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:14:11.716892 1683521 out.go:352] Setting JSON to false
	I0804 09:14:11.717934 1683521 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":150941,"bootTime":1754147911,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 09:14:11.718042 1683521 start.go:140] virtualization: kvm guest
	I0804 09:14:11.719821 1683521 out.go:177] * [functional-699837] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0804 09:14:11.720835 1683521 out.go:177]   - MINIKUBE_LOCATION=21223
	I0804 09:14:11.720869 1683521 notify.go:220] Checking for updates...
	I0804 09:14:11.722767 1683521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 09:14:11.723980 1683521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	I0804 09:14:11.724962 1683521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	I0804 09:14:11.725977 1683521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 09:14:11.726884 1683521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 09:14:11.728212 1683521 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
	I0804 09:14:11.728808 1683521 driver.go:416] Setting default libvirt URI to qemu:///system
	I0804 09:14:11.754162 1683521 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0804 09:14:11.754269 1683521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:14:11.807011 1683521 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:58 SystemTime:2025-08-04 09:14:11.797857069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:14:11.807161 1683521 docker.go:318] overlay module found
	I0804 09:14:11.808723 1683521 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0804 09:14:11.809672 1683521 start.go:304] selected driver: docker
	I0804 09:14:11.809692 1683521 start.go:918] validating driver "docker" against &{Name:functional-699837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1753871403-21198@sha256:df7d018c3a6a26c5bb83a41102cf6ee056f62471011edba5d602d02edb5f5d1d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-beta.0 ClusterName:functional-699837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 09:14:11.809804 1683521 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 09:14:11.812046 1683521 out.go:201] 
	W0804 09:14:11.813123 1683521 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0804 09:14:11.814088 1683521 out.go:201] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/InternationalLanguage (0.15s)
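
InternationalLanguage is the same undersized dry run with the French translation selected; driving it via the locale environment, as sketched here, is an assumption about how the test injects the language:

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-699837 --dry-run --memory 250MB --driver=docker --container-runtime=docker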

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/SSHCmd (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/SSHCmd (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CpCmd (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh -n functional-699837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cp functional-699837:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelCpCmd4180608053/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh -n functional-699837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh -n functional-699837 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CpCmd (1.73s)
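
The three cp invocations above cover the full matrix: host-to-guest, guest-to-host, and host-to-guest into a directory that does not yet exist; the guest side of a path is addressed as <profile>:<path>. Condensed, using the same paths as the test:

$ out/minikube-linux-amd64 -p functional-699837 cp testdata/cp-test.txt /home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p functional-699837 cp functional-699837:/home/docker/cp-test.txt ./cp-test.txt
$ out/minikube-linux-amd64 -p functional-699837 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt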

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/FileSync (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1582690/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /etc/test/nested/copy/1582690/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/FileSync (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CertSync (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1582690.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /etc/ssl/certs/1582690.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1582690.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /usr/share/ca-certificates/1582690.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/15826902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /etc/ssl/certs/15826902.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/15826902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /usr/share/ca-certificates/15826902.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/CertSync (1.76s)
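
The .0 names checked last (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases: system trust stores look certificates up by the hash of their subject, so minikube installs each synced cert both under its .pem name and under <subject-hash>.0. The hash can be recomputed from the cert itself (a sketch; the cert path is the one from this run):

$ openssl x509 -noout -hash -in /usr/share/ca-certificates/1582690.pem
51391683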

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NonActiveRuntimeDisabled (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh "sudo systemctl is-active crio": exit status 1 (300.363034ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/NonActiveRuntimeDisabled (0.30s)
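
Here the non-zero exit is the pass condition: systemctl is-active exits 0 only for an active unit, and an inactive one prints "inactive" and exits 3, which matches the "Process exited with status 3" above and confirms crio is disabled while docker is this cluster's runtime. The same check by hand:

$ out/minikube-linux-amd64 -p functional-699837 ssh "sudo systemctl is-active crio"    # "inactive", exit 3
$ out/minikube-linux-amd64 -p functional-699837 ssh "sudo systemctl is-active docker"  # "active", exit 0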

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/License (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/License (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/components (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "320.88376ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "53.952641ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "341.347382ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "71.384097ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-699837 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0-beta.0
registry.k8s.io/kube-proxy:v1.34.0-beta.0
registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
registry.k8s.io/kube-apiserver:v1.34.0-beta.0
registry.k8s.io/etcd:3.6.1-1
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-699837
docker.io/kicbase/echo-server:functional-699837
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-699837 image ls --format short --alsologtostderr:
I0804 09:14:18.984009 1689533 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:18.984325 1689533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:18.984386 1689533 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:18.984400 1689533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:18.984622 1689533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:18.985276 1689533 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:18.985443 1689533 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:18.985957 1689533 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:19.009329 1689533 ssh_runner.go:195] Run: systemctl --version
I0804 09:14:19.009388 1689533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:19.027402 1689533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
I0804 09:14:19.118183 1689533 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-699837 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-699837 │ 797de983819df │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.0-beta.0    │ d85eea91cc41d │ 85.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0-beta.0    │ 21d34a2aeacf5 │ 51.2MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0-beta.0    │ 9ad783615e1bc │ 73.1MB │
│ registry.k8s.io/etcd                        │ 3.5.21-0          │ 499038711c081 │ 153MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/pause                       │ 3.10              │ 873ed75102791 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-699837 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0-beta.0    │ c3709a85b683d │ 70.7MB │
│ registry.k8s.io/etcd                        │ 3.6.1-1           │ 1e30c0b1e9b99 │ 195MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-699837 image ls --format table --alsologtostderr:
I0804 09:14:19.302994 1689748 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:19.303333 1689748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.303344 1689748 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:19.303348 1689748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.303543 1689748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:19.304154 1689748 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.304260 1689748 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.304656 1689748 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:19.321366 1689748 ssh_runner.go:195] Run: systemctl --version
I0804 09:14:19.321421 1689748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:19.338113 1689748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
I0804 09:14:19.425705 1689748 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-699837 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"9ad783615e1bcab361c82a9806b5005b33be3f6aa181043df837a10d1e523451","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0-beta.0"],"size":"73100000"},{"id":"1e30c0b1e9b99661d763456c0194cfa70e04ad7cdb9aa70b6b418088ee3d7da6","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.1-1"],"size":"195000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-699837"],"size":"4940000"},{"id":"0184c1613d92931126feb4c5
48e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"797de983819df1053ceac23ac86649e60f6b5d9fc16abed56c6b616579ffe3df","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-699837"],"size":"30"},{"id":"d85eea91cc41d02b12e6ee2ad012006130cd8674faf51465c6d28a98448d8531","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0-beta.0"],"size":"85700000"},{"id":"21d34a2aeacf50a8e47e77c972881726a216b817bbb276ea0f3c72200a4c5981","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0-beta.0"],"size":"51200000"},{"id":"c3709a85b683daaf3cdc79801e6f4718a0d57414e0238f231227818abd98f6bf","repoDigests":[
],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0-beta.0"],"size":"70700000"},{"id":"499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"153000000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-699837 image ls --format json --alsologtostderr:
I0804 09:14:19.509515 1689853 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:19.509799 1689853 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.509810 1689853 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:19.509814 1689853 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.509987 1689853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:19.510540 1689853 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.510637 1689853 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.511023 1689853 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:19.529743 1689853 ssh_runner.go:195] Run: systemctl --version
I0804 09:14:19.529788 1689853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:19.546375 1689853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
I0804 09:14:19.633378 1689853 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListJson (0.21s)
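
Of the four ls formats exercised here, the JSON form is the one meant for post-processing; for example, just the tag list can be pulled out with jq (jq usage is illustrative, not part of the test):

$ out/minikube-linux-amd64 -p functional-699837 image ls --format json | jq -r '.[].repoTags[]'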

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-699837 image ls --format yaml --alsologtostderr:
- id: 21d34a2aeacf50a8e47e77c972881726a216b817bbb276ea0f3c72200a4c5981
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0-beta.0
size: "51200000"
- id: 9ad783615e1bcab361c82a9806b5005b33be3f6aa181043df837a10d1e523451
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0-beta.0
size: "73100000"
- id: c3709a85b683daaf3cdc79801e6f4718a0d57414e0238f231227818abd98f6bf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0-beta.0
size: "70700000"
- id: 499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "153000000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-699837
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 797de983819df1053ceac23ac86649e60f6b5d9fc16abed56c6b616579ffe3df
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-699837
size: "30"
- id: 1e30c0b1e9b99661d763456c0194cfa70e04ad7cdb9aa70b6b418088ee3d7da6
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.1-1
size: "195000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d85eea91cc41d02b12e6ee2ad012006130cd8674faf51465c6d28a98448d8531
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0-beta.0
size: "85700000"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-699837 image ls --format yaml --alsologtostderr:
I0804 09:14:19.087247 1689624 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:19.087357 1689624 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.087370 1689624 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:19.087377 1689624 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.087544 1689624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:19.088149 1689624 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.088249 1689624 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.088592 1689624 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:19.105797 1689624 ssh_runner.go:195] Run: systemctl --version
I0804 09:14:19.105861 1689624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:19.123677 1689624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
I0804 09:14:19.209826 1689624 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageBuild (4.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh pgrep buildkitd: exit status 1 (247.46824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image build -t localhost/my-image:functional-699837 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-699837 image build -t localhost/my-image:functional-699837 testdata/build --alsologtostderr: (4.484576182s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-699837 image build -t localhost/my-image:functional-699837 testdata/build --alsologtostderr:
I0804 09:14:19.441061 1689818 out.go:345] Setting OutFile to fd 1 ...
I0804 09:14:19.441527 1689818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.441542 1689818 out.go:358] Setting ErrFile to fd 2...
I0804 09:14:19.441550 1689818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0804 09:14:19.441772 1689818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
I0804 09:14:19.442353 1689818 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.442944 1689818 config.go:182] Loaded profile config "functional-699837": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0-beta.0
I0804 09:14:19.443400 1689818 cli_runner.go:164] Run: docker container inspect functional-699837 --format={{.State.Status}}
I0804 09:14:19.461418 1689818 ssh_runner.go:195] Run: systemctl --version
I0804 09:14:19.461479 1689818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-699837
I0804 09:14:19.478454 1689818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/functional-699837/id_rsa Username:docker}
I0804 09:14:19.569360 1689818 build_images.go:161] Building image from path: /tmp/build.435913752.tar
I0804 09:14:19.569429 1689818 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0804 09:14:19.577832 1689818 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.435913752.tar
I0804 09:14:19.580803 1689818 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.435913752.tar: stat -c "%s %y" /var/lib/minikube/build/build.435913752.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.435913752.tar': No such file or directory
I0804 09:14:19.580824 1689818 ssh_runner.go:362] scp /tmp/build.435913752.tar --> /var/lib/minikube/build/build.435913752.tar (3072 bytes)
I0804 09:14:19.602311 1689818 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.435913752
I0804 09:14:19.609921 1689818 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.435913752 -xf /var/lib/minikube/build/build.435913752.tar
I0804 09:14:19.618029 1689818 docker.go:373] Building image: /var/lib/minikube/build/build.435913752
I0804 09:14:19.618107 1689818 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-699837 /var/lib/minikube/build/build.435913752
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.1s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8dad25cf24b348f43aad3ab402723ed56306a41411edd7b24469a0e0046e25ba done
#8 naming to localhost/my-image:functional-699837 done
#8 DONE 0.0s
I0804 09:14:23.854117 1689818 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-699837 /var/lib/minikube/build/build.435913752: (4.235984033s)
I0804 09:14:23.854171 1689818 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.435913752
I0804 09:14:23.862305 1689818 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.435913752.tar
I0804 09:14:23.870056 1689818 build_images.go:217] Built localhost/my-image:functional-699837 from /tmp/build.435913752.tar
I0804 09:14:23.870095 1689818 build_images.go:133] succeeded building to: functional-699837
I0804 09:14:23.870102 1689818 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageBuild (4.93s)
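
Judging from BuildKit steps #1-#7 above (97 B of Dockerfile, a busybox base resolved to the sha256 digest in step #5, a no-op RUN, one ADD), testdata/build is roughly the following three-instruction Dockerfile; this is reconstructed from the trace, not copied from the repo:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /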

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.776423452s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-699837
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/Setup (1.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdspecific-port2621928662/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.325128ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0804 09:14:11.508365 1582690 retry.go:31] will retry after 593.288916ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdspecific-port2621928662/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-699837 ssh "sudo umount -f /mount-9p": exit status 1 (304.668053ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-699837 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdspecific-port2621928662/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/specific-port (2.17s)
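
The first findmnt probe failing with exit 1 is expected: the mount runs as a background daemon and the 9p mount takes a moment to appear, hence the ~593ms retry above before the second probe succeeds. Verifying a fixed-port mount by hand looks like this (the host path is illustrative):

$ out/minikube-linux-amd64 mount -p functional-699837 /tmp/src:/mount-9p --port 46464 &
$ out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T /mount-9p | grep 9p"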

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/VerifyCleanup (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-699837 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-699837 /tmp/TestFunctionalNewestKubernetesVersionv1.34.0-beta.0parallelMountCmdVerifyCleanup352349839/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/MountCmd/VerifyCleanup (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Done: docker pull kicbase/echo-server:latest: (1.700139648s)
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-699837
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image load --daemon kicbase/echo-server:functional-699837 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image save kicbase/echo-server:functional-699837 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
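
StartTunnel only launches the tunnel daemon, which is why it passes in 0.00s; the later TunnelCmd steps do the actual verification. While the tunnel is running, LoadBalancer services get a reachable external IP instead of staying <pending> (a generic illustration, not commands from this run):

$ out/minikube-linux-amd64 -p functional-699837 tunnel &
$ kubectl get svc -A   # EXTERNAL-IP columns populate while the tunnel is up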

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image rm kicbase/echo-server:functional-699837 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageRemove (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-699837
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-699837 image save --daemon kicbase/echo-server:functional-699837 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-699837
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)
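
Together with ImageSaveToFile and ImageLoadFromFile above, this closes the round trip: an image can leave the cluster as a tarball, re-enter it from that tarball, or be pushed straight back into the host docker daemon with --daemon. The full cycle, using this run's image and paths:

$ out/minikube-linux-amd64 -p functional-699837 image save kicbase/echo-server:functional-699837 ./echo-server-save.tar
$ out/minikube-linux-amd64 -p functional-699837 image load ./echo-server-save.tar
$ out/minikube-linux-amd64 -p functional-699837 image save --daemon kicbase/echo-server:functional-699837
$ docker image inspect kicbase/echo-server:functional-699837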

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-699837 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-699837
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-699837
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-699837
--- PASS: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (101.4s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0804 09:19:03.491539 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.253416 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.259789 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.271144 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.292602 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.333992 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.415398 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.576923 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:15.898466 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:16.540416 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:17.822463 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:20.384314 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:25.505755 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:35.747981 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:19:56.229403 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m40.722192921s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (101.40s)

TestMultiControlPlane/serial/DeployApp (38.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 kubectl -- rollout status deployment/busybox: (3.697154584s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:03.866711 1582690 retry.go:31] will retry after 867.717461ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:04.851714 1582690 retry.go:31] will retry after 1.832425615s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:06.801747 1582690 retry.go:31] will retry after 2.78728134s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:09.707562 1582690 retry.go:31] will retry after 2.247438524s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:12.075026 1582690 retry.go:31] will retry after 4.834250273s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:17.029441 1582690 retry.go:31] will retry after 9.066949552s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0804 09:20:26.217361 1582690 retry.go:31] will retry after 9.758853283s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-ffn6s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-mf4l9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-nd2qn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-ffn6s -- nslookup kubernetes.default
E0804 09:20:37.191602 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-mf4l9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-nd2qn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-ffn6s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-mf4l9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-nd2qn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (38.18s)
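
Note: the retries above are the test polling until the busybox deployment reports one pod IP per replica; the two-IP readings were transient while the third pod was still coming up. A minimal shell sketch of the same poll (profile name and replica count taken from this run; the loop is illustrative, not the test's Go code):

  # Wait until the busybox deployment reports 3 distinct pod IPs.
  for i in $(seq 1 30); do
    ips=$(out/minikube-linux-amd64 -p ha-406687 kubectl -- \
      get pods -o jsonpath='{.items[*].status.podIP}')
    [ "$(echo $ips | wc -w)" -eq 3 ] && break   # one IP per replica
    sleep 2
  done
  echo "pod IPs: $ips"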

TestMultiControlPlane/serial/PingHostFromPods (1.11s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-ffn6s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-ffn6s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-mf4l9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-mf4l9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-nd2qn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec busybox-58667487b6-nd2qn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)
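
Note: the ha_test.go:207 pipeline pulls the resolved address of host.minikube.internal out of BusyBox's nslookup output (the address sits on output line 5, third space-separated field), and ha_test.go:218 then pings it from inside the pod. The same two steps by hand, using a pod name from this run:

  POD=busybox-58667487b6-ffn6s   # pod name from this run
  HOST_IP=$(out/minikube-linux-amd64 -p ha-406687 kubectl -- exec "$POD" -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  out/minikube-linux-amd64 -p ha-406687 kubectl -- exec "$POD" -- \
    sh -c "ping -c 1 $HOST_IP"   # resolved to 192.168.49.1 in this run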

TestMultiControlPlane/serial/AddWorkerNode (14.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node add --alsologtostderr -v 5
E0804 09:20:41.678508 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 node add --alsologtostderr -v 5: (13.297458941s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (14.12s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-406687 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
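
Note: the jsonpath template above iterates every node and prints its full label map. To pull one specific label per node instead, dots inside a label key must be backslash-escaped (an illustrative variant, not part of the test):

  kubectl --context ha-406687 get nodes \
    -o jsonpath='{range .items[*]}{.metadata.labels.kubernetes\.io/hostname}{"\n"}{end}'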

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (16.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp testdata/cp-test.txt ha-406687:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2676395180/001/cp-test_ha-406687.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687:/home/docker/cp-test.txt ha-406687-m02:/home/docker/cp-test_ha-406687_ha-406687-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test_ha-406687_ha-406687-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687:/home/docker/cp-test.txt ha-406687-m03:/home/docker/cp-test_ha-406687_ha-406687-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test_ha-406687_ha-406687-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687:/home/docker/cp-test.txt ha-406687-m04:/home/docker/cp-test_ha-406687_ha-406687-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test_ha-406687_ha-406687-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp testdata/cp-test.txt ha-406687-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2676395180/001/cp-test_ha-406687-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m02:/home/docker/cp-test.txt ha-406687:/home/docker/cp-test_ha-406687-m02_ha-406687.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test_ha-406687-m02_ha-406687.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m02:/home/docker/cp-test.txt ha-406687-m03:/home/docker/cp-test_ha-406687-m02_ha-406687-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test_ha-406687-m02_ha-406687-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m02:/home/docker/cp-test.txt ha-406687-m04:/home/docker/cp-test_ha-406687-m02_ha-406687-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test_ha-406687-m02_ha-406687-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp testdata/cp-test.txt ha-406687-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2676395180/001/cp-test_ha-406687-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m03:/home/docker/cp-test.txt ha-406687:/home/docker/cp-test_ha-406687-m03_ha-406687.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test_ha-406687-m03_ha-406687.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m03:/home/docker/cp-test.txt ha-406687-m02:/home/docker/cp-test_ha-406687-m03_ha-406687-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test_ha-406687-m03_ha-406687-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m03:/home/docker/cp-test.txt ha-406687-m04:/home/docker/cp-test_ha-406687-m03_ha-406687-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test_ha-406687-m03_ha-406687-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp testdata/cp-test.txt ha-406687-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2676395180/001/cp-test_ha-406687-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m04:/home/docker/cp-test.txt ha-406687:/home/docker/cp-test_ha-406687-m04_ha-406687.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687 "sudo cat /home/docker/cp-test_ha-406687-m04_ha-406687.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m04:/home/docker/cp-test.txt ha-406687-m02:/home/docker/cp-test_ha-406687-m04_ha-406687-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 "sudo cat /home/docker/cp-test_ha-406687-m04_ha-406687-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 cp ha-406687-m04:/home/docker/cp-test.txt ha-406687-m03:/home/docker/cp-test_ha-406687-m04_ha-406687-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m03 "sudo cat /home/docker/cp-test_ha-406687-m04_ha-406687-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.14s)
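
Note: the matrix above exercises every copy direction (host to node, node to host, node to node), each verified by ssh'ing into the destination and cat'ing the file back. The core pattern, reduced to a single direction (illustrative):

  # Copy a file into a node, then read it back to verify the transfer.
  out/minikube-linux-amd64 -p ha-406687 cp testdata/cp-test.txt \
    ha-406687-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-406687 ssh -n ha-406687-m02 \
    "sudo cat /home/docker/cp-test.txt"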

TestMultiControlPlane/serial/StopSecondaryNode (11.39s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 node stop m02 --alsologtostderr -v 5: (10.750143014s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5: exit status 7 (641.672254ms)

-- stdout --
	ha-406687
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406687-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-406687-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406687-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0804 09:21:21.178420 1720389 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:21:21.178542 1720389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:21:21.178555 1720389 out.go:358] Setting ErrFile to fd 2...
	I0804 09:21:21.178560 1720389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:21:21.178778 1720389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:21:21.178967 1720389 out.go:352] Setting JSON to false
	I0804 09:21:21.178996 1720389 mustload.go:65] Loading cluster: ha-406687
	I0804 09:21:21.179147 1720389 notify.go:220] Checking for updates...
	I0804 09:21:21.179394 1720389 config.go:182] Loaded profile config "ha-406687": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:21:21.179414 1720389 status.go:174] checking status of ha-406687 ...
	I0804 09:21:21.179842 1720389 cli_runner.go:164] Run: docker container inspect ha-406687 --format={{.State.Status}}
	I0804 09:21:21.198357 1720389 status.go:371] ha-406687 host status = "Running" (err=<nil>)
	I0804 09:21:21.198382 1720389 host.go:66] Checking if "ha-406687" exists ...
	I0804 09:21:21.198717 1720389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-406687
	I0804 09:21:21.216288 1720389 host.go:66] Checking if "ha-406687" exists ...
	I0804 09:21:21.216605 1720389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:21:21.216661 1720389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-406687
	I0804 09:21:21.233780 1720389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/ha-406687/id_rsa Username:docker}
	I0804 09:21:21.326228 1720389 ssh_runner.go:195] Run: systemctl --version
	I0804 09:21:21.330301 1720389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:21:21.341371 1720389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:21:21.390102 1720389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:73 SystemTime:2025-08-04 09:21:21.381122747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:21:21.390865 1720389 kubeconfig.go:125] found "ha-406687" server: "https://192.168.49.254:8443"
	I0804 09:21:21.390903 1720389 api_server.go:166] Checking apiserver status ...
	I0804 09:21:21.390944 1720389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:21:21.402264 1720389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2528/cgroup
	I0804 09:21:21.411135 1720389 api_server.go:182] apiserver freezer: "6:freezer:/docker/5c0ef4458c5f3441233b55809c3d6a496cdf735617e9c5d1a61c4433023ebeae/kubepods/burstable/pod4dad4c51c45bec3b69a84925b0895ac8/5c3e68452ee90db060d50a3688edb894dacf6e32f515168cdc77f82ab97181e3"
	I0804 09:21:21.411187 1720389 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c0ef4458c5f3441233b55809c3d6a496cdf735617e9c5d1a61c4433023ebeae/kubepods/burstable/pod4dad4c51c45bec3b69a84925b0895ac8/5c3e68452ee90db060d50a3688edb894dacf6e32f515168cdc77f82ab97181e3/freezer.state
	I0804 09:21:21.418843 1720389 api_server.go:204] freezer state: "THAWED"
	I0804 09:21:21.418870 1720389 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0804 09:21:21.422684 1720389 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0804 09:21:21.422711 1720389 status.go:463] ha-406687 apiserver status = Running (err=<nil>)
	I0804 09:21:21.422723 1720389 status.go:176] ha-406687 status: &{Name:ha-406687 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:21:21.422737 1720389 status.go:174] checking status of ha-406687-m02 ...
	I0804 09:21:21.423012 1720389 cli_runner.go:164] Run: docker container inspect ha-406687-m02 --format={{.State.Status}}
	I0804 09:21:21.440306 1720389 status.go:371] ha-406687-m02 host status = "Stopped" (err=<nil>)
	I0804 09:21:21.440327 1720389 status.go:384] host is not running, skipping remaining checks
	I0804 09:21:21.440335 1720389 status.go:176] ha-406687-m02 status: &{Name:ha-406687-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:21:21.440357 1720389 status.go:174] checking status of ha-406687-m03 ...
	I0804 09:21:21.440585 1720389 cli_runner.go:164] Run: docker container inspect ha-406687-m03 --format={{.State.Status}}
	I0804 09:21:21.457102 1720389 status.go:371] ha-406687-m03 host status = "Running" (err=<nil>)
	I0804 09:21:21.457123 1720389 host.go:66] Checking if "ha-406687-m03" exists ...
	I0804 09:21:21.457424 1720389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-406687-m03
	I0804 09:21:21.475054 1720389 host.go:66] Checking if "ha-406687-m03" exists ...
	I0804 09:21:21.475341 1720389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:21:21.475396 1720389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-406687-m03
	I0804 09:21:21.491503 1720389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/ha-406687-m03/id_rsa Username:docker}
	I0804 09:21:21.578136 1720389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:21:21.588531 1720389 kubeconfig.go:125] found "ha-406687" server: "https://192.168.49.254:8443"
	I0804 09:21:21.588557 1720389 api_server.go:166] Checking apiserver status ...
	I0804 09:21:21.588589 1720389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:21:21.598641 1720389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2526/cgroup
	I0804 09:21:21.606688 1720389 api_server.go:182] apiserver freezer: "6:freezer:/docker/23c912905ba821def33485c4d169ca96a8f9e5df44b5a2853a7bf30c2f38142d/kubepods/burstable/podfc69559661b67d3ebd36204dedbf5c01/50c026015b489db0e1b37786effd53c9b082396a13a1cd62a6fd357d42aa9b0f"
	I0804 09:21:21.606741 1720389 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/23c912905ba821def33485c4d169ca96a8f9e5df44b5a2853a7bf30c2f38142d/kubepods/burstable/podfc69559661b67d3ebd36204dedbf5c01/50c026015b489db0e1b37786effd53c9b082396a13a1cd62a6fd357d42aa9b0f/freezer.state
	I0804 09:21:21.614887 1720389 api_server.go:204] freezer state: "THAWED"
	I0804 09:21:21.614921 1720389 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0804 09:21:21.619250 1720389 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0804 09:21:21.619272 1720389 status.go:463] ha-406687-m03 apiserver status = Running (err=<nil>)
	I0804 09:21:21.619283 1720389 status.go:176] ha-406687-m03 status: &{Name:ha-406687-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:21:21.619305 1720389 status.go:174] checking status of ha-406687-m04 ...
	I0804 09:21:21.619538 1720389 cli_runner.go:164] Run: docker container inspect ha-406687-m04 --format={{.State.Status}}
	I0804 09:21:21.637357 1720389 status.go:371] ha-406687-m04 host status = "Running" (err=<nil>)
	I0804 09:21:21.637382 1720389 host.go:66] Checking if "ha-406687-m04" exists ...
	I0804 09:21:21.637609 1720389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-406687-m04
	I0804 09:21:21.653448 1720389 host.go:66] Checking if "ha-406687-m04" exists ...
	I0804 09:21:21.653671 1720389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:21:21.653703 1720389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-406687-m04
	I0804 09:21:21.670067 1720389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/ha-406687-m04/id_rsa Username:docker}
	I0804 09:21:21.758028 1720389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:21:21.768266 1720389 status.go:176] ha-406687-m04 status: &{Name:ha-406687-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.39s)
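
Note: the stderr trace shows how status decides a node's apiserver is Running: pgrep for the kube-apiserver process, read its freezer cgroup state (THAWED means not paused), then GET /healthz on the HA endpoint. The same probe by hand (<PID> is a placeholder; the endpoint address comes from this run):

  out/minikube-linux-amd64 -p ha-406687 ssh -- \
    "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"       # apiserver PID
  out/minikube-linux-amd64 -p ha-406687 ssh -- \
    "sudo egrep '^[0-9]+:freezer:' /proc/<PID>/cgroup"   # its freezer cgroup
  curl -ks https://192.168.49.254:8443/healthz            # expect: ok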

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 node start m02 --alsologtostderr -v 5: (35.879820043s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
E0804 09:21:59.113825 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (152.78s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 stop --alsologtostderr -v 5: (33.177394393s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 start --wait true --alsologtostderr -v 5
E0804 09:23:44.752822 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:24:03.491190 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:24:15.254273 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 start --wait true --alsologtostderr -v 5: (1m59.48014235s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (152.78s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 node delete m03 --alsologtostderr -v 5: (8.50709986s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.29s)
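
Note: the go-template at ha_test.go:521 walks every node's conditions and prints the status of the Ready condition, so a healthy post-delete cluster prints one True line per remaining node. An equivalent invocation with plain shell quoting (the dbg line above shows the template after Go's own quoting):

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'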

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (32.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 stop --alsologtostderr -v 5
E0804 09:24:42.955854 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 stop --alsologtostderr -v 5: (32.453794197s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5: exit status 7 (104.223116ms)

-- stdout --
	ha-406687
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-406687-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-406687-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0804 09:25:15.406033 1754137 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:25:15.406131 1754137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:25:15.406139 1754137 out.go:358] Setting ErrFile to fd 2...
	I0804 09:25:15.406144 1754137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:25:15.406324 1754137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:25:15.406490 1754137 out.go:352] Setting JSON to false
	I0804 09:25:15.406520 1754137 mustload.go:65] Loading cluster: ha-406687
	I0804 09:25:15.406571 1754137 notify.go:220] Checking for updates...
	I0804 09:25:15.407006 1754137 config.go:182] Loaded profile config "ha-406687": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:25:15.407031 1754137 status.go:174] checking status of ha-406687 ...
	I0804 09:25:15.407500 1754137 cli_runner.go:164] Run: docker container inspect ha-406687 --format={{.State.Status}}
	I0804 09:25:15.424877 1754137 status.go:371] ha-406687 host status = "Stopped" (err=<nil>)
	I0804 09:25:15.424899 1754137 status.go:384] host is not running, skipping remaining checks
	I0804 09:25:15.424905 1754137 status.go:176] ha-406687 status: &{Name:ha-406687 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:25:15.424925 1754137 status.go:174] checking status of ha-406687-m02 ...
	I0804 09:25:15.425144 1754137 cli_runner.go:164] Run: docker container inspect ha-406687-m02 --format={{.State.Status}}
	I0804 09:25:15.443421 1754137 status.go:371] ha-406687-m02 host status = "Stopped" (err=<nil>)
	I0804 09:25:15.443458 1754137 status.go:384] host is not running, skipping remaining checks
	I0804 09:25:15.443473 1754137 status.go:176] ha-406687-m02 status: &{Name:ha-406687-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:25:15.443509 1754137 status.go:174] checking status of ha-406687-m04 ...
	I0804 09:25:15.443760 1754137 cli_runner.go:164] Run: docker container inspect ha-406687-m04 --format={{.State.Status}}
	I0804 09:25:15.460767 1754137 status.go:371] ha-406687-m04 host status = "Stopped" (err=<nil>)
	I0804 09:25:15.460785 1754137 status.go:384] host is not running, skipping remaining checks
	I0804 09:25:15.460791 1754137 status.go:176] ha-406687-m04 status: &{Name:ha-406687-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.56s)
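
Note: per the stderr trace, host state for each node comes straight from Docker: status runs docker container inspect --format={{.State.Status}} and maps the stopped container state to "Stopped", skipping the kubelet/apiserver checks. The same query by hand for this run's remaining nodes:

  for node in ha-406687 ha-406687-m02 ha-406687-m04; do
    printf '%s: ' "$node"
    docker container inspect "$node" --format '{{.State.Status}}'   # e.g. exited
  done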

TestMultiControlPlane/serial/RestartCluster (91.3s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0804 09:25:41.681962 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m30.542600752s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.30s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (26.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-406687 node add --control-plane --alsologtostderr -v 5: (25.214354605s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-406687 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (26.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestImageBuild/serial/Setup (26.45s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-361421 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-361421 --driver=docker  --container-runtime=docker: (26.454691051s)
--- PASS: TestImageBuild/serial/Setup (26.45s)

TestImageBuild/serial/NormalBuild (1.05s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-361421
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-361421: (1.049211559s)
--- PASS: TestImageBuild/serial/NormalBuild (1.05s)
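
Note: minikube image build runs the build inside the cluster's container runtime, so the resulting tag is usable by pods without a registry push. An illustrative follow-up to confirm the image landed in-cluster (not part of the test):

  out/minikube-linux-amd64 -p image-361421 image build -t aaa:latest \
    ./testdata/image-build/test-normal
  out/minikube-linux-amd64 -p image-361421 image ls | grep aaa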

TestImageBuild/serial/BuildWithBuildArg (0.65s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-361421
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.65s)
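
Note: --build-opt=build-arg=ENV_A=test_env_str forwards a Docker build argument into the build. The testdata/image-build/test-arg Dockerfile is not reproduced in this report; a hypothetical one that would consume the argument looks like this:

  # Hypothetical Dockerfile consuming the forwarded build arg
  # (the real testdata content may differ).
  printf 'FROM busybox\nARG ENV_A\nRUN echo "built with ENV_A=${ENV_A}"\n' > Dockerfile
  out/minikube-linux-amd64 -p image-361421 image build -t aaa:latest \
    --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache .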

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-361421
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.49s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-361421
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.49s)

TestJSONOutput/start/Command (71.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-403603 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0804 09:29:03.492245 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-403603 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m11.269751118s)
--- PASS: TestJSONOutput/start/Command (71.27s)
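
Note: with --output=json each progress step is emitted as one CloudEvents envelope per line (the same shape is visible under TestErrorJSONOutput below), which makes the stream easy to machine-read. An illustrative consumer using jq:

  out/minikube-linux-amd64 start -p json-output-403603 --output=json \
    --user=testUser --memory=3072 --wait=true --driver=docker \
    --container-runtime=docker \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
           | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'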

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-403603 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-403603 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-403603 --output=json --user=testUser
E0804 09:29:15.254231 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-403603 --output=json --user=testUser: (10.812394845s)
--- PASS: TestJSONOutput/stop/Command (10.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-111972 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-111972 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (63.605669ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a3cb313c-163a-4041-b511-d1a1586bf4d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-111972] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1354ff3-d1fd-48f3-b1e9-c0ecda26e090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21223"}}
	{"specversion":"1.0","id":"9a6c3dee-71ed-4204-960e-d289a4019445","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7ae8aac3-1cca-4108-9611-212214072179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig"}}
	{"specversion":"1.0","id":"3f62a26f-f685-497f-9a0e-0f830f0a0a3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube"}}
	{"specversion":"1.0","id":"bb8ff0e5-4721-44a0-96d8-cc2e2354c8bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f982a701-c2a0-4a5c-aac2-74eeb0d56efa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a6955a3-5602-4236-87ac-38e152d9f5ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-111972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-111972
--- PASS: TestErrorJSONOutput (0.20s)
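
Note: every line of the --output=json stream shown above is a CloudEvents envelope with a "type" field, so it can be post-processed with ordinary JSON tooling. A minimal sketch, assuming jq is installed and using a hypothetical profile name json-demo:

	# Surface only error events from minikube's JSON stream.
	# 'json-demo' is a hypothetical profile; the start is expected to fail here.
	out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'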

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-112473 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-112473 --network=: (26.156335974s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-112473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-112473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-112473: (2.095997741s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.27s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-638972 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-638972 --network=bridge: (24.874941167s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-638972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-638972
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-638972: (1.92282591s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.82s)
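
Note: the two runs above exercise the --network flag on the Docker driver: an empty value lets minikube create its own user-defined network, while --network=bridge reuses Docker's default bridge. A minimal sketch of the same flow, with a hypothetical profile name net-demo:

	# Start on Docker's default bridge, confirm the network list, clean up.
	out/minikube-linux-amd64 start -p net-demo --network=bridge
	docker network ls --format {{.Name}}
	out/minikube-linux-amd64 delete -p net-demo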

                                                
                                    
TestKicExistingNetwork (27.3s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0804 09:30:13.482813 1582690 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0804 09:30:13.498300 1582690 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0804 09:30:13.498377 1582690 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0804 09:30:13.498394 1582690 cli_runner.go:164] Run: docker network inspect existing-network
W0804 09:30:13.513451 1582690 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0804 09:30:13.513479 1582690 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0804 09:30:13.513492 1582690 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0804 09:30:13.513634 1582690 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0804 09:30:13.529464 1582690 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b4122743d943 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:66:3d:c4:8d:93} reservation:<nil>}
I0804 09:30:13.529864 1582690 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4d6a0}
I0804 09:30:13.529902 1582690 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0804 09:30:13.529951 1582690 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0804 09:30:13.580408 1582690 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-430827 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-430827 --network=existing-network: (25.239132114s)
helpers_test.go:175: Cleaning up "existing-network-430827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-430827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-430827: (1.935208213s)
I0804 09:30:40.771257 1582690 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.30s)
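
Note: this test first creates a user-defined bridge network itself and then points --network at it, so minikube attaches to the existing network instead of provisioning one. A sketch under the same assumptions (the subnet and the profile name reuse-demo are illustrative):

	# Pre-create a network, then reuse it for a cluster.
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	out/minikube-linux-amd64 start -p reuse-demo --network=existing-network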

                                                
                                    
TestKicCustomSubnet (28.15s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-489239 --subnet=192.168.60.0/24
E0804 09:30:41.678152 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-489239 --subnet=192.168.60.0/24: (26.046993885s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-489239 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-489239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-489239
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-489239: (2.079406129s)
--- PASS: TestKicCustomSubnet (28.15s)
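
Note: --subnet pins the CIDR of the network minikube creates for the cluster; the docker network inspect call is the same check the test performs, since the network is named after the profile. Sketch with a hypothetical profile name subnet-demo:

	out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24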

                                                
                                    
TestKicStaticIP (27.67s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-017601 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-017601 --static-ip=192.168.200.200: (25.528281793s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-017601 ip
helpers_test.go:175: Cleaning up "static-ip-017601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-017601
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-017601: (2.017840335s)
--- PASS: TestKicStaticIP (27.67s)
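
Note: --static-ip assigns the node a fixed address, and `minikube ip` is then expected to echo it back, which is exactly what the two commands above verify. Sketch with a hypothetical profile name ip-demo:

	out/minikube-linux-amd64 start -p ip-demo --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p ip-demo ip   # should print 192.168.200.200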

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (56.94s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-041763 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-041763 --driver=docker  --container-runtime=docker: (26.110697448s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-056543 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-056543 --driver=docker  --container-runtime=docker: (25.511932816s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-041763
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-056543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-056543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-056543
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-056543: (2.093403802s)
helpers_test.go:175: Cleaning up "first-041763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-041763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-041763: (2.102733859s)
--- PASS: TestMinikubeProfile (56.94s)
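
Note: `minikube profile <name>` switches the active profile and `profile list -ojson` reports all profiles with their state, which is how the test toggles between the two clusters it created:

	out/minikube-linux-amd64 profile first-041763
	out/minikube-linux-amd64 profile list -ojson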

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-042429 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-042429 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.564712064s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.56s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-042429 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
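
Note: the mount flags above expose a host directory inside the guest at /minikube-host even with --no-kubernetes, and the ssh ls is the verification step. A minimal sketch with a hypothetical profile name mount-demo:

	out/minikube-linux-amd64 start -p mount-demo --memory=3072 --mount --mount-uid 0 --mount-gid 0 --no-kubernetes --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host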

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-057602 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-057602 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.674171132s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057602 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.44s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-042429 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-042429 --alsologtostderr -v=5: (1.436118212s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057602 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-057602
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-057602: (1.171493042s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-057602
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-057602: (8.024722746s)
--- PASS: TestMountStart/serial/RestartStopped (9.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057602 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (62.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123428 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0804 09:33:46.567856 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:34:03.491847 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123428 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m1.948046468s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.39s)
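
Note: --nodes=2 brings up a control plane plus one worker in a single start, and `status` then reports each machine separately. Sketch with a hypothetical profile name mn-demo:

	out/minikube-linux-amd64 start -p mn-demo --nodes=2 --memory=3072 --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p mn-demo status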

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (58.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- rollout status deployment/busybox
E0804 09:34:15.254287 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-123428 -- rollout status deployment/busybox: (3.71535217s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:15.395636 1582690 retry.go:31] will retry after 1.092985148s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:16.605708 1582690 retry.go:31] will retry after 1.544957298s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:18.266223 1582690 retry.go:31] will retry after 1.162374632s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:19.544586 1582690 retry.go:31] will retry after 2.682676609s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:22.344015 1582690 retry.go:31] will retry after 3.136367489s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:25.596273 1582690 retry.go:31] will retry after 9.739093278s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:35.452471 1582690 retry.go:31] will retry after 10.321261493s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0804 09:34:45.892217 1582690 retry.go:31] will retry after 22.542024134s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-5kml2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-9dk7s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-5kml2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-9dk7s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-5kml2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-9dk7s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (58.85s)
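
Note: the retry loop above polls a JSONPath query until the busybox deployment reports one pod IP per node; "got 1" is transient while the pod scheduled to the second node is still starting. The same query by hand:

	# Passes once two space-separated pod IPs (one per node) are printed.
	kubectl --context multinode-123428 get pods -o jsonpath='{.items[*].status.podIP}'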

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-5kml2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-5kml2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-9dk7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123428 -- exec busybox-58667487b6-9dk7s -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                    
TestMultiNode/serial/AddNode (14.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-123428 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-123428 -v=5 --alsologtostderr: (13.524274265s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.12s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-123428 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp testdata/cp-test.txt multinode-123428:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2495425463/001/cp-test_multinode-123428.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428:/home/docker/cp-test.txt multinode-123428-m02:/home/docker/cp-test_multinode-123428_multinode-123428-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test_multinode-123428_multinode-123428-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428:/home/docker/cp-test.txt multinode-123428-m03:/home/docker/cp-test_multinode-123428_multinode-123428-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m03 "sudo cat /home/docker/cp-test_multinode-123428_multinode-123428-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp testdata/cp-test.txt multinode-123428-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2495425463/001/cp-test_multinode-123428-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428-m02:/home/docker/cp-test.txt multinode-123428:/home/docker/cp-test_multinode-123428-m02_multinode-123428.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428 "sudo cat /home/docker/cp-test_multinode-123428-m02_multinode-123428.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428-m02:/home/docker/cp-test.txt multinode-123428-m03:/home/docker/cp-test_multinode-123428-m02_multinode-123428-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m03 "sudo cat /home/docker/cp-test_multinode-123428-m02_multinode-123428-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp testdata/cp-test.txt multinode-123428-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2495425463/001/cp-test_multinode-123428-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428-m03:/home/docker/cp-test.txt multinode-123428:/home/docker/cp-test_multinode-123428-m03_multinode-123428.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428 "sudo cat /home/docker/cp-test_multinode-123428-m03_multinode-123428.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 cp multinode-123428-m03:/home/docker/cp-test.txt multinode-123428-m02:/home/docker/cp-test_multinode-123428-m03_multinode-123428-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test_multinode-123428-m03_multinode-123428-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.25s)
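
Note: `minikube cp` accepts a host path or <node>:<path> on either side, which is what the matrix above exercises across the control plane and both workers. One direction, taken from the commands above:

	out/minikube-linux-amd64 -p multinode-123428 cp testdata/cp-test.txt multinode-123428-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-123428 ssh -n multinode-123428-m02 "sudo cat /home/docker/cp-test.txt"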

                                                
                                    
TestMultiNode/serial/StopNode (2.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-123428 node stop m03: (1.172979166s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123428 status: exit status 7 (460.093192ms)

                                                
                                                
-- stdout --
	multinode-123428
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-123428-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-123428-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr: exit status 7 (455.658464ms)

                                                
                                                
-- stdout --
	multinode-123428
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-123428-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-123428-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 09:35:36.747121 1845817 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:35:36.747277 1845817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:35:36.747292 1845817 out.go:358] Setting ErrFile to fd 2...
	I0804 09:35:36.747297 1845817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:35:36.747508 1845817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:35:36.747679 1845817 out.go:352] Setting JSON to false
	I0804 09:35:36.747710 1845817 mustload.go:65] Loading cluster: multinode-123428
	I0804 09:35:36.747835 1845817 notify.go:220] Checking for updates...
	I0804 09:35:36.748283 1845817 config.go:182] Loaded profile config "multinode-123428": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:35:36.748314 1845817 status.go:174] checking status of multinode-123428 ...
	I0804 09:35:36.748963 1845817 cli_runner.go:164] Run: docker container inspect multinode-123428 --format={{.State.Status}}
	I0804 09:35:36.767101 1845817 status.go:371] multinode-123428 host status = "Running" (err=<nil>)
	I0804 09:35:36.767129 1845817 host.go:66] Checking if "multinode-123428" exists ...
	I0804 09:35:36.767378 1845817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-123428
	I0804 09:35:36.785730 1845817 host.go:66] Checking if "multinode-123428" exists ...
	I0804 09:35:36.786004 1845817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:35:36.786062 1845817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-123428
	I0804 09:35:36.803705 1845817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/multinode-123428/id_rsa Username:docker}
	I0804 09:35:36.894265 1845817 ssh_runner.go:195] Run: systemctl --version
	I0804 09:35:36.898231 1845817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:35:36.908631 1845817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0804 09:35:36.958990 1845817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-08-04 09:35:36.9502435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0804 09:35:36.959590 1845817 kubeconfig.go:125] found "multinode-123428" server: "https://192.168.67.2:8443"
	I0804 09:35:36.959623 1845817 api_server.go:166] Checking apiserver status ...
	I0804 09:35:36.959669 1845817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 09:35:36.970445 1845817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2536/cgroup
	I0804 09:35:36.978799 1845817 api_server.go:182] apiserver freezer: "6:freezer:/docker/ecb230d0c3c37f63a31cba0ea962ee681ccb2dba9f73ac641452dbf0f0540db9/kubepods/burstable/podf784d46e85efbfa31b60eb52476ace45/65e1eec381f0ca899aefde815f319ba651f5417e6c727d78e8b2458570d306a7"
	I0804 09:35:36.978874 1845817 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ecb230d0c3c37f63a31cba0ea962ee681ccb2dba9f73ac641452dbf0f0540db9/kubepods/burstable/podf784d46e85efbfa31b60eb52476ace45/65e1eec381f0ca899aefde815f319ba651f5417e6c727d78e8b2458570d306a7/freezer.state
	I0804 09:35:36.986202 1845817 api_server.go:204] freezer state: "THAWED"
	I0804 09:35:36.986229 1845817 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0804 09:35:36.990414 1845817 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0804 09:35:36.990435 1845817 status.go:463] multinode-123428 apiserver status = Running (err=<nil>)
	I0804 09:35:36.990445 1845817 status.go:176] multinode-123428 status: &{Name:multinode-123428 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:35:36.990459 1845817 status.go:174] checking status of multinode-123428-m02 ...
	I0804 09:35:36.990689 1845817 cli_runner.go:164] Run: docker container inspect multinode-123428-m02 --format={{.State.Status}}
	I0804 09:35:37.007505 1845817 status.go:371] multinode-123428-m02 host status = "Running" (err=<nil>)
	I0804 09:35:37.007526 1845817 host.go:66] Checking if "multinode-123428-m02" exists ...
	I0804 09:35:37.007790 1845817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-123428-m02
	I0804 09:35:37.024438 1845817 host.go:66] Checking if "multinode-123428-m02" exists ...
	I0804 09:35:37.024667 1845817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 09:35:37.024699 1845817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-123428-m02
	I0804 09:35:37.040812 1845817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21223-1578987/.minikube/machines/multinode-123428-m02/id_rsa Username:docker}
	I0804 09:35:37.126004 1845817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 09:35:37.136329 1845817 status.go:176] multinode-123428-m02 status: &{Name:multinode-123428-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:35:37.136375 1845817 status.go:174] checking status of multinode-123428-m03 ...
	I0804 09:35:37.136614 1845817 cli_runner.go:164] Run: docker container inspect multinode-123428-m03 --format={{.State.Status}}
	I0804 09:35:37.153771 1845817 status.go:371] multinode-123428-m03 host status = "Stopped" (err=<nil>)
	I0804 09:35:37.153791 1845817 status.go:384] host is not running, skipping remaining checks
	I0804 09:35:37.153799 1845817 status.go:176] multinode-123428-m03 status: &{Name:multinode-123428-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
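
Note: with a node stopped, `minikube status` still prints per-node state but exits non-zero (exit status 7 in this run), so scripts can detect a degraded cluster from the exit code alone. Sketch:

	out/minikube-linux-amd64 -p multinode-123428 node stop m03
	out/minikube-linux-amd64 -p multinode-123428 status || echo "status exited $? (non-zero: at least one node is down)"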

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 node start m03 -v=5 --alsologtostderr
E0804 09:35:38.318198 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:35:41.678508 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-123428 node start m03 -v=5 --alsologtostderr: (8.095034746s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.74s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-123428
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-123428
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-123428: (22.464364742s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123428 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123428 --wait=true -v=5 --alsologtostderr: (53.940229143s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-123428
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-123428 node delete m03: (4.638364341s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-123428 stop: (21.356938394s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123428 status: exit status 7 (83.447841ms)

                                                
                                                
-- stdout --
	multinode-123428
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-123428-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr: exit status 7 (85.04564ms)

                                                
                                                
-- stdout --
	multinode-123428
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-123428-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 09:37:29.074659 1861869 out.go:345] Setting OutFile to fd 1 ...
	I0804 09:37:29.074905 1861869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:37:29.074913 1861869 out.go:358] Setting ErrFile to fd 2...
	I0804 09:37:29.074917 1861869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0804 09:37:29.075074 1861869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21223-1578987/.minikube/bin
	I0804 09:37:29.075232 1861869 out.go:352] Setting JSON to false
	I0804 09:37:29.075263 1861869 mustload.go:65] Loading cluster: multinode-123428
	I0804 09:37:29.075362 1861869 notify.go:220] Checking for updates...
	I0804 09:37:29.075568 1861869 config.go:182] Loaded profile config "multinode-123428": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
	I0804 09:37:29.075586 1861869 status.go:174] checking status of multinode-123428 ...
	I0804 09:37:29.075975 1861869 cli_runner.go:164] Run: docker container inspect multinode-123428 --format={{.State.Status}}
	I0804 09:37:29.095411 1861869 status.go:371] multinode-123428 host status = "Stopped" (err=<nil>)
	I0804 09:37:29.095452 1861869 status.go:384] host is not running, skipping remaining checks
	I0804 09:37:29.095462 1861869 status.go:176] multinode-123428 status: &{Name:multinode-123428 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 09:37:29.095502 1861869 status.go:174] checking status of multinode-123428-m02 ...
	I0804 09:37:29.095815 1861869 cli_runner.go:164] Run: docker container inspect multinode-123428-m02 --format={{.State.Status}}
	I0804 09:37:29.112360 1861869 status.go:371] multinode-123428-m02 host status = "Stopped" (err=<nil>)
	I0804 09:37:29.112378 1861869 status.go:384] host is not running, skipping remaining checks
	I0804 09:37:29.112383 1861869 status.go:176] multinode-123428-m02 status: &{Name:multinode-123428-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.53s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123428 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123428 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (56.429608048s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123428 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-123428
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123428-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-123428-m02 --driver=docker  --container-runtime=docker: exit status 14 (61.415398ms)

                                                
                                                
-- stdout --
	* [multinode-123428-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-123428-m02' is duplicated with machine name 'multinode-123428-m02' in profile 'multinode-123428'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123428-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123428-m03 --driver=docker  --container-runtime=docker: (24.844145302s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-123428
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-123428: exit status 80 (286.993234ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-123428 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-123428-m03 already exists in multinode-123428-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-123428-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-123428-m03: (2.080389681s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.32s)
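Both rejections above are by design: multinode machines are named <profile>-m02, -m03, and so on, so a new profile matching an existing machine name fails with MK_USAGE (exit 14), and `node add` refuses a node name already claimed by another profile (exit 80, GUEST_NODE_ADD). A minimal sketch for checking names before creating a profile (the new profile name is a placeholder):

    # machine names of existing profiles appear in the JSON output
    out/minikube-linux-amd64 profile list --output json
    out/minikube-linux-amd64 start -p some-unique-name --driver=docker --container-runtime=docker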

TestPreload (110.7s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-721224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0804 09:39:03.492042 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:39:15.253914 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-721224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (57.143896244s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-721224 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-721224 image pull gcr.io/k8s-minikube/busybox: (2.838986785s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-721224
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-721224: (10.749678803s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-721224 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0804 09:40:24.755129 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:40:41.678529 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-721224 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (37.585243071s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-721224 image list
helpers_test.go:175: Cleaning up "test-preload-721224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-721224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-721224: (2.175527293s)
--- PASS: TestPreload (110.70s)
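TestPreload verifies that an image pulled into a non-preloaded cluster survives a stop/start cycle. A minimal sketch of the same flow (the profile name is a placeholder):

    minikube start -p demo --memory=3072 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.24.4
    minikube -p demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p demo
    minikube start -p demo --memory=3072 --wait=true --driver=docker --container-runtime=docker
    minikube -p demo image list    # busybox should still be listed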

TestScheduledStopUnix (99.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-900417 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-900417 --memory=3072 --driver=docker  --container-runtime=docker: (26.164945686s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-900417 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-900417 -n scheduled-stop-900417
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-900417 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0804 09:41:14.455914 1582690 retry.go:31] will retry after 103.922µs: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.457115 1582690 retry.go:31] will retry after 112.551µs: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.458221 1582690 retry.go:31] will retry after 302.473µs: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.459383 1582690 retry.go:31] will retry after 222.09µs: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.460511 1582690 retry.go:31] will retry after 724.717µs: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.461658 1582690 retry.go:31] will retry after 786.692µs: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.462793 1582690 retry.go:31] will retry after 1.182549ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.465008 1582690 retry.go:31] will retry after 2.361944ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.468232 1582690 retry.go:31] will retry after 3.419577ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.472460 1582690 retry.go:31] will retry after 3.279939ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.476674 1582690 retry.go:31] will retry after 3.466699ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.480855 1582690 retry.go:31] will retry after 11.029965ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.491992 1582690 retry.go:31] will retry after 10.984974ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.503216 1582690 retry.go:31] will retry after 14.031933ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
I0804 09:41:14.517362 1582690 retry.go:31] will retry after 37.488739ms: open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/scheduled-stop-900417/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-900417 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-900417 -n scheduled-stop-900417
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-900417
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-900417 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-900417
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-900417: exit status 7 (69.617086ms)
-- stdout --
	scheduled-stop-900417
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-900417 -n scheduled-stop-900417
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-900417 -n scheduled-stop-900417: exit status 7 (66.332202ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-900417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-900417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-900417: (1.627201265s)
--- PASS: TestScheduledStopUnix (99.09s)
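The scheduled-stop flow above arms a timer, inspects it, cancels it, and re-arms it; once the stop fires, `status` exits 7 with everything reported Stopped. A minimal sketch using the same commands (profile name is a placeholder):

    minikube stop -p demo --schedule 5m                # arm a stop 5 minutes out
    minikube status -p demo --format={{.TimeToStop}}   # inspect the countdown
    minikube stop -p demo --cancel-scheduled           # disarm
    minikube stop -p demo --schedule 15s               # re-arm and let it fire
    minikube status -p demo --format={{.Host}}         # prints Stopped, exit status 7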

TestSkaffold (111.84s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2477517258 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-252449 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-252449 --memory=3072 --driver=docker  --container-runtime=docker: (26.582661315s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2477517258 run --minikube-profile skaffold-252449 --kube-context skaffold-252449 --status-check=true --port-forward=false --interactive=false
E0804 09:44:03.491684 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2477517258 run --minikube-profile skaffold-252449 --kube-context skaffold-252449 --status-check=true --port-forward=false --interactive=false: (1m6.716438023s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-67df78bc5b-28ppg" [8f231a0e-7b1f-4665-8e0c-55df728cd1da] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003419311s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f794c4bb4-6nwx9" [6a46adf2-0a1a-4ab9-925e-ba49e50c2ee1] Running
E0804 09:44:15.254133 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004147879s
helpers_test.go:175: Cleaning up "skaffold-252449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-252449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-252449: (2.717531569s)
--- PASS: TestSkaffold (111.84s)
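The skaffold run above builds and deploys against the named minikube profile, after which the test waits for the leeroy-app and leeroy-web pods to become healthy. A minimal sketch, assuming a skaffold binary on PATH and a project containing a skaffold.yaml:

    minikube start -p demo --memory=3072 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile demo --kube-context demo \
      --status-check=true --port-forward=false --interactive=false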

TestInsufficientStorage (9.83s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-840172 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-840172 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.66711766s)
-- stdout --
	{"specversion":"1.0","id":"67f896d9-845f-4365-9697-6c0820b6b8cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-840172] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"86182452-fb56-4e34-83c2-e2fc1a5a2256","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21223"}}
	{"specversion":"1.0","id":"c37b2f55-14f1-4047-8812-757ae43ccf10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1fcf2963-2287-47b3-9d6e-78af6ecd922e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig"}}
	{"specversion":"1.0","id":"47359500-a822-40e7-967d-fb4de1ff8950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube"}}
	{"specversion":"1.0","id":"a8ef6187-024e-43a4-9c9b-710ea460d4b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9674034a-8348-4663-9c77-64473abf0d1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bf86d8b0-12e7-4766-8532-b9c5d56cb613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f9f895bb-e1f2-460c-9563-9d080dbf646e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b0dfdb95-2628-4271-b4bc-daa42a21c422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef3d9dd1-c60f-4f50-bc15-3b6798642bba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ee9ab27d-9379-4a44-abfc-d3a19c7ae49c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-840172\" primary control-plane node in \"insufficient-storage-840172\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eabbe37a-531a-488e-bd68-3cff630a6d65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1753871403-21198 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"49b8fdf0-5846-4136-b1a7-4155ab311b09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c8a6859-6f6b-42d6-ade4-3a944c1f408b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-840172 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-840172 --output=json --layout=cluster: exit status 7 (250.656942ms)
-- stdout --
	{"Name":"insufficient-storage-840172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-840172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0804 09:44:26.735139 1903613 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-840172" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-840172 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-840172 --output=json --layout=cluster: exit status 7 (258.772785ms)
-- stdout --
	{"Name":"insufficient-storage-840172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-840172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0804 09:44:26.994398 1903709 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-840172" does not appear in /home/jenkins/minikube-integration/21223-1578987/kubeconfig
	E0804 09:44:27.003881 1903709 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/insufficient-storage-840172/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-840172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-840172
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-840172: (1.65577102s)
--- PASS: TestInsufficientStorage (9.83s)
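With --output=json, `minikube start` emits one CloudEvents-style JSON object per line, so the RSRC_DOCKER_STORAGE error (exit 26) seen above can be picked out mechanically. A minimal sketch, with jq as an assumed helper rather than part of the test:

    minikube start -p demo --output=json --driver=docker --container-runtime=docker \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
    # the advice field suggests `docker system prune` when /var fills up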

TestRunningBinaryUpgrade (101.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3538380918 start -p running-upgrade-088433 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3538380918 start -p running-upgrade-088433 --memory=3072 --vm-driver=docker  --container-runtime=docker: (50.102925137s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-088433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-088433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.678739583s)
helpers_test.go:175: Cleaning up "running-upgrade-088433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-088433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-088433: (2.628899334s)
--- PASS: TestRunningBinaryUpgrade (101.63s)
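Both binary-upgrade tests share one pattern: bring a cluster up with an older release binary, then restart the same profile with the binary under test and assert the start succeeds. A minimal sketch (the old-binary path is a placeholder):

    /tmp/minikube-v1.26.0 start -p demo --memory=3072 --vm-driver=docker --container-runtime=docker
    out/minikube-linux-amd64 start -p demo --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=docker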

TestMissingContainerUpgrade (199.57s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3413155412 start -p missing-upgrade-385779 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3413155412 start -p missing-upgrade-385779 --memory=3072 --driver=docker  --container-runtime=docker: (2m11.839912599s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-385779
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-385779: (10.480212135s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-385779
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-385779 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-385779 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.134666316s)
helpers_test.go:175: Cleaning up "missing-upgrade-385779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-385779
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-385779: (2.309369696s)
--- PASS: TestMissingContainerUpgrade (199.57s)

TestStoppedBinaryUpgrade/Setup (3.25s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

TestStoppedBinaryUpgrade/Upgrade (194.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.954930165 start -p stopped-upgrade-484805 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.954930165 start -p stopped-upgrade-484805 --memory=3072 --vm-driver=docker  --container-runtime=docker: (2m32.996290734s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.954930165 -p stopped-upgrade-484805 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.954930165 -p stopped-upgrade-484805 stop: (12.039197066s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-484805 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-484805 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.027025227s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (194.06s)

TestPause/serial/Start (76.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-997625 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-997625 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m16.024957341s)
--- PASS: TestPause/serial/Start (76.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-484805
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-484805: (1.170419725s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-653834 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-653834 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (64.802928ms)
-- stdout --
	* [NoKubernetes-653834] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21223-1578987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21223-1578987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
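The MK_USAGE rejection above (exit 14) enforces that --no-kubernetes and --kubernetes-version are mutually exclusive. The recovery path printed in stderr, as a minimal sketch:

    minikube config unset kubernetes-version    # drop any globally configured version
    minikube start -p demo --no-kubernetes --driver=docker --container-runtime=docker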

TestNoKubernetes/serial/StartWithK8s (31.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-653834 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-653834 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.378396131s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-653834 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.69s)

TestNoKubernetes/serial/StartWithStopK8s (17.49s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-653834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-653834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (15.519285102s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-653834 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-653834 status -o json: exit status 2 (290.586552ms)
-- stdout --
	{"Name":"NoKubernetes-653834","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-653834
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-653834: (1.682839145s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.49s)

TestNoKubernetes/serial/Start (7.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-653834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-653834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.334646566s)
--- PASS: TestNoKubernetes/serial/Start (7.33s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-653834 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-653834 "sudo systemctl is-active --quiet service kubelet": exit status 1 (252.83893ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
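The check above leans on systemctl exit codes: `is-active` succeeds only for an active unit, and the status 3 in stderr means kubelet is inactive, which is exactly what a --no-kubernetes profile should report. A minimal sketch:

    if minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet running"       # would be a test failure here
    else
      echo "kubelet not running"   # expected for --no-kubernetes
    fi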

TestNoKubernetes/serial/ProfileList (30.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (29.548596857s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0804 09:49:15.254074 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:49:15.341518 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.080370424s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.63s)

TestPause/serial/SecondStartNoReconfiguration (75.76s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-997625 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-997625 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m15.739275603s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (75.76s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-653834
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-653834: (1.264805499s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (8.39s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-653834 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-653834 --driver=docker  --container-runtime=docker: (8.39471471s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-653834 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-653834 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.238524ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStartStop/group/old-k8s-version/serial/FirstStart (115.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-304259 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-304259 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m55.904807489s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (115.90s)

TestPause/serial/Pause (0.57s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-997625 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.57s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-997625 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-997625 --output=json --layout=cluster: exit status 2 (295.557887ms)
-- stdout --
	{"Name":"pause-997625","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-997625","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
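The cluster layout borrows HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage (the last seen in TestInsufficientStorage above), and the command exits 2 while the cluster is paused. A minimal sketch for pulling out the per-component names, with jq as an assumed helper:

    minikube status -p demo --output=json --layout=cluster \
      | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'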

TestPause/serial/Unpause (0.73s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-997625 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

TestPause/serial/PauseAgain (0.73s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-997625 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

TestPause/serial/DeletePaused (2.47s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-997625 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-997625 --alsologtostderr -v=5: (2.469712595s)
--- PASS: TestPause/serial/DeletePaused (2.47s)

TestPause/serial/VerifyDeletedResources (16.32s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.260295433s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-997625
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-997625: exit status 1 (16.829521ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-997625: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.32s)
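Deletion is verified at the Docker level: once the profile is gone, its container, volume, and network should all be gone too, and `docker volume inspect` exits non-zero as shown above. A minimal sketch (profile name is a placeholder):

    minikube delete -p demo
    docker ps -a --filter name=demo        # expect no matching containers
    docker volume inspect demo             # expect exit 1, "no such volume"
    docker network ls --filter name=demo   # expect no matching networks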

TestStartStop/group/embed-certs/serial/FirstStart (76.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-876579 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3
E0804 09:50:26.569930 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:50:27.026519 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:50:41.677874 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-876579 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3: (1m16.152802135s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.15s)

TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-876579 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10466840-d217-4b4c-b26b-e3eb01c0af03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10466840-d217-4b4c-b26b-e3eb01c0af03] Running
E0804 09:51:48.948787 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0034898s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-876579 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-876579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-876579 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/embed-certs/serial/Stop (10.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-876579 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-876579 --alsologtostderr -v=3: (10.74910399s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.75s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-304259 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce63df03-ec0c-47aa-9912-f8d7897a3c86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce63df03-ec0c-47aa-9912-f8d7897a3c86] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003370514s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-304259 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-876579 -n embed-certs-876579
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-876579 -n embed-certs-876579: exit status 7 (94.187506ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-876579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
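Addons can be toggled while the cluster is stopped, and the --images flag overrides a component image, as with the MetricsScraper substitution above; the change is applied when the cluster next starts. A minimal sketch:

    minikube addons enable dashboard -p demo --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    minikube start -p demo    # dashboard comes up with the overridden image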

TestStartStop/group/embed-certs/serial/SecondStart (52.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-876579 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-876579 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3: (52.144871167s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-876579 -n embed-certs-876579
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-304259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-304259 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (10.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-304259 --alsologtostderr -v=3
E0804 09:52:18.320541 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-304259 --alsologtostderr -v=3: (10.798824088s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.80s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-304259 -n old-k8s-version-304259
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-304259 -n old-k8s-version-304259: exit status 7 (75.623869ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-304259 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (114.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-304259 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-304259 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m53.95639788s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-304259 -n old-k8s-version-304259
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (114.33s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rq2wn" [ce8ffa3c-3d2b-4d1a-93e7-cfd0e52e0677] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003308429s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rq2wn" [ce8ffa3c-3d2b-4d1a-93e7-cfd0e52e0677] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003178763s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-876579 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.45s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-876579 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I0804 09:53:08.425062 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
I0804 09:53:08.816394 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
I0804 09:53:09.219741 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.45s)

TestStartStop/group/embed-certs/serial/Pause (2.5s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-876579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-876579 -n embed-certs-876579
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-876579 -n embed-certs-876579: exit status 2 (363.649144ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-876579 -n embed-certs-876579
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-876579 -n embed-certs-876579: exit status 2 (298.465024ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-876579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-876579 -n embed-certs-876579
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-876579 -n embed-certs-876579
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.50s)
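
The pause cycle above is reproducible by hand with the same commands the test runs; a minimal sketch against this run's profile, with the exit codes the test observed:

    out/minikube-linux-amd64 pause -p embed-certs-876579
    # while paused, the {{.APIServer}} template reports "Paused" and {{.Kubelet}} "Stopped";
    # both status calls exit 2, which the test explicitly tolerates
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-876579
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-876579
    out/minikube-linux-amd64 unpause -p embed-certs-876579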

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-670157 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3
E0804 09:54:03.491856 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:54:05.086964 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-670157 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3: (1m8.97750953s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.98s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k54gt" [7464a0c1-0979-440c-9558-03e81b87bf8d] Running
E0804 09:54:15.253741 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-699837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00343682s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k54gt" [7464a0c1-0979-440c-9558-03e81b87bf8d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002777169s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-304259 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
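
The UserAppExists/AddonExists pair above verifies the dashboard survives the stop/restart cycle: a label wait on the pod, then a lookup of the metrics-scraper deployment. A manual equivalent, with kubectl wait standing in (as an assumption) for the harness's own 9m polling loop:

    kubectl --context old-k8s-version-304259 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context old-k8s-version-304259 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper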

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-304259 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-304259 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-304259 -n old-k8s-version-304259
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-304259 -n old-k8s-version-304259: exit status 2 (280.348799ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-304259 -n old-k8s-version-304259
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-304259 -n old-k8s-version-304259: exit status 2 (286.535071ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-304259 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-304259 -n old-k8s-version-304259
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-304259 -n old-k8s-version-304259
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-670157 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d85f9d9-b1dd-445d-bbec-68f4f3d366c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1d85f9d9-b1dd-445d-bbec-68f4f3d366c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003114011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-670157 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.24s)
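
A hand-run version of this DeployApp step, using the same manifest and ulimit probe; the kubectl wait line is an assumed substitute for the harness polling the integration-test=busybox label for up to 8m:

    kubectl --context default-k8s-diff-port-670157 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-670157 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m
    kubectl --context default-k8s-diff-port-670157 exec busybox -- /bin/sh -c "ulimit -n"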

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-670157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-670157 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.72s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-670157 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-670157 --alsologtostderr -v=3: (10.717336374s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157: exit status 7 (73.256676ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-670157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
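
Enabling an addon while the profile is stopped completes in well under a second; the dashboard itself only comes up after SecondStart below, which the later UserAppExistsAfterStop check confirms. The sequence, with the status call first confirming the host is down (exit status 7):

    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670157
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-670157 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4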

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-670157 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3
E0804 09:55:41.678555 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-670157 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.3: (54.105095137s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.40s)
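
Note that SecondStart repeats every flag from FirstStart, including --apiserver-port=8444, so the profile keeps its non-default API server port across the restart. Trimmed to the flags that matter:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-670157 --memory=3072 \
      --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.33.3
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670157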

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jbwkj" [709cded3-014f-4d88-9e71-4556e73a3589] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003372856s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jbwkj" [709cded3-014f-4d88-9e71-4556e73a3589] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003603076s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-670157 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-670157 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I0804 09:56:16.747259 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
I0804 09:56:17.172534 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
I0804 09:56:17.585807 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.45s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-670157 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157: exit status 2 (280.334989ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157: exit status 2 (278.277093ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-670157 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-670157 -n default-k8s-diff-port-670157
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.31s)

TestNetworkPlugins/group/auto/Start (69.82s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0804 09:56:58.364373 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:58.370754 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:58.382138 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:58.403483 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:58.444846 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:58.526266 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:58.687829 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:59.009524 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:56:59.651572 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:57:00.933164 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:57:03.494974 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:57:04.756935 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:57:08.617206 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:57:18.858837 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m9.823204103s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.82s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-561540 "pgrep -a kubelet"
I0804 09:57:32.833468 1582690 config.go:182] Loaded profile config "auto-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5np6c" [7daf3fcf-70de-48d2-81f6-221d0c83d127] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5np6c" [7daf3fcf-70de-48d2-81f6-221d0c83d127] Running
E0804 09:57:39.340407 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004079198s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
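
The three probes above all execute inside the netcat deployment: DNS resolves the kubernetes.default service name through cluster DNS, Localhost checks the pod can reach its own port on 127.0.0.1, and HairPin checks the pod can reach itself back through its own service (hairpin traffic). The exact commands, reusable against any profile in this section:

    kubectl --context auto-561540 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"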

TestNetworkPlugins/group/calico/Start (60.84s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m0.835765494s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.84s)

TestNetworkPlugins/group/custom-flannel/Start (52.06s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0804 09:58:20.302375 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (52.059169338s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.06s)
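
Unlike the named plugins elsewhere in this section, --cni here points at a manifest file, so minikube applies the bundled kube-flannel.yaml rather than configuring a built-in plugin. The start line, trimmed:

    out/minikube-linux-amd64 start -p custom-flannel-561540 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker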

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-561540 "pgrep -a kubelet"
I0804 09:58:53.722954 1582690 config.go:182] Loaded profile config "custom-flannel-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5gvzj" [7df8955c-bb3f-4491-9666-df4030490a21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5gvzj" [7df8955c-bb3f-4491-9666-df4030490a21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004357862s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dn554" [0e81a64a-cd53-4330-a19f-477c0cc54114] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003770918s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
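
ControllerPod waits on the CNI's own daemonset pod rather than a workload pod; an assumed kubectl equivalent of the harness's 10m label wait:

    kubectl --context calico-561540 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m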

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-561540 "pgrep -a kubelet"
I0804 09:59:01.761707 1582690 config.go:182] Loaded profile config "calico-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zg2c5" [d7940691-bf62-4638-b9ed-c254b72b75f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zg2c5" [d7940691-bf62-4638-b9ed-c254b72b75f0] Running
E0804 09:59:05.086897 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003855724s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (82.34s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m22.337402938s)
--- PASS: TestNetworkPlugins/group/false/Start (82.34s)
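
--cni=false starts the cluster without any CNI plugin; with the docker container runtime, pod connectivity presumably falls back to Docker's own bridge networking, which the false/NetCatPod, DNS, Localhost and HairPin checks below then exercise. The invocation, trimmed:

    out/minikube-linux-amd64 start -p false-561540 --memory=3072 \
      --cni=false --driver=docker --container-runtime=docker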

TestNetworkPlugins/group/kindnet/Start (60.31s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0804 09:59:42.224003 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.237730 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.244176 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.255527 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.276873 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.318279 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.399755 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.561281 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:50.882987 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:51.524696 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:52.805960 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 09:59:55.367613 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:00:00.489747 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:00:10.731544 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m0.308618434s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.31s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-448jw" [6b86eb48-05a3-4eb1-b6d1-8c8d4c09dfc5] Running
E0804 10:00:31.212991 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003325336s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-561540 "pgrep -a kubelet"
I0804 10:00:36.930785 1582690 config.go:182] Loaded profile config "kindnet-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h92jq" [f4e14a00-d0c7-401a-9c52-16bd8b640926] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h92jq" [f4e14a00-d0c7-401a-9c52-16bd8b640926] Running
E0804 10:00:41.677950 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003400047s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)
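
Every NetCatPod step in this section force-replaces the same netcat deployment and waits on its app=netcat label; a manual sketch, with kubectl wait standing in (as an assumption) for the harness's 15m poll:

    kubectl --context kindnet-561540 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-561540 wait --for=condition=Ready pod -l app=netcat --timeout=15m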

TestNetworkPlugins/group/false/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-561540 "pgrep -a kubelet"
I0804 10:00:43.773882 1582690 config.go:182] Loaded profile config "false-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

TestNetworkPlugins/group/false/NetCatPod (9.17s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-skv4r" [b7c0b159-f71c-4775-9bcd-1bceef253e6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-skv4r" [b7c0b159-f71c-4775-9bcd-1bceef253e6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004495192s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.17s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/false/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (79.34s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m19.342145885s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.34s)

TestNetworkPlugins/group/enable-default-cni/Start (71.75s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m11.753086679s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.75s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-561540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-brz84" [a3e4ac27-7393-4ef0-8268-f9c6d3c96e67] Running
I0804 10:02:24.376816 1582690 config.go:182] Loaded profile config "enable-default-cni-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003570563s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5c9js" [118eacf5-452e-4cb5-8cf7-f411e5d9fba9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 10:02:26.065424 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-5c9js" [118eacf5-452e-4cb5-8cf7-f411e5d9fba9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003354687s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-561540 "pgrep -a kubelet"
I0804 10:02:30.505026 1582690 config.go:182] Loaded profile config "flannel-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-62txm" [b74d987d-7c10-42cb-af06-d13fc1b09637] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 10:02:33.013128 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:02:33.019487 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:02:33.030824 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:02:33.052170 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:02:33.093583 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:02:33.175015 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:02:33.336535 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-62txm" [b74d987d-7c10-42cb-af06-d13fc1b09637] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003748236s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-561540 exec deployment/netcat -- nslookup kubernetes.default
E0804 10:02:33.658343 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (68.3s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0804 10:02:53.506614 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m8.29969579s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.30s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestNetworkPlugins/group/kubenet/Start (64.89s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0804 10:03:13.988242 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-561540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m4.887582193s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (64.89s)

TestStartStop/group/no-preload/serial/Stop (1.19s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-499486 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-499486 --alsologtostderr -v=3: (1.185974109s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.19s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499486 -n no-preload-499486: exit status 7 (70.889448ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-499486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-561540 "pgrep -a kubelet"
I0804 10:04:00.401214 1582690 config.go:182] Loaded profile config "bridge-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-561540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k4j22" [94cc9219-df09-4f23-9e70-85a08223e88f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 10:04:00.628409 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:04:03.491563 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-k4j22" [94cc9219-df09-4f23-9e70-85a08223e88f] Running
E0804 10:04:04.145493 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003331556s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-561540 "pgrep -a kubelet"
I0804 10:04:04.975968 1582690 config.go:182] Loaded profile config "kubenet-561540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.3
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-561540 replace --force -f testdata/netcat-deployment.yaml
E0804 10:04:05.086951 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7zgmf" [a5accebf-c854-448d-bafb-885951b130d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 10:04:05.750207 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-7zgmf" [a5accebf-c854-448d-bafb-885951b130d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003407787s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-561540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-561540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0804 10:04:14.387340 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0804 10:04:34.868925 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:04:36.473914 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:04:50.236989 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:15.830878 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:16.872257 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:17.435227 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:17.938843 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/default-k8s-diff-port-670157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:28.152045 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/skaffold-252449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.664058 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.670444 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.681710 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.703117 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.744536 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.825986 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:30.987511 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:31.309190 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:31.951487 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:33.233264 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:35.794855 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:40.916589 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:41.677827 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/functional-114794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:43.934400 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:43.940762 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:43.952167 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:43.973538 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:44.014941 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:44.096391 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:44.258095 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:44.579971 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:45.222065 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:46.504469 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:49.066333 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:51.158341 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:05:54.188348 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:04.430713 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:11.640446 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:24.912786 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:37.752890 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/custom-flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:39.357514 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/calico-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:52.602705 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:06:58.365106 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/old-k8s-version-304259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:05.874381 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:06.572271 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/addons-309866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.251049 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.257422 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.268755 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.290182 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.331583 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.413006 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.543520 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.549939 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.561317 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.574654 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.583052 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.624443 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.705869 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.867394 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:24.896790 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:25.188732 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:25.538699 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:25.830283 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:26.821016 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:27.111622 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:29.382876 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:29.673580 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:33.012785 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:34.505060 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:34.795825 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:44.746867 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:07:45.038037 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:00.713967 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/auto-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:05.228700 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/flannel-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:05.519617 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/enable-default-cni-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:14.525015 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kindnet-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0804 10:08:27.795723 1582690 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/false-561540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/newest-cni/serial/Stop (1.19s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-768931 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-768931 --alsologtostderr -v=3: (1.187476021s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-768931 -n newest-cni-768931: exit status 7 (169.6391ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-768931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.38s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-768931 image list --format=json
I0804 10:08:47.822041 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
I0804 10:08:48.213112 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
I0804 10:08:48.596290 1582690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.38s)

Test skip (28/431)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.33.3/cached-images 0
15 TestDownloadOnly/v1.33.3/binaries 0
16 TestDownloadOnly/v1.33.3/kubectl 0
23 TestDownloadOnly/v1.34.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.34.0-beta.0/binaries 0
25 TestDownloadOnly/v1.34.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
65 TestHyperKitDriverInstallOrUpdate 0
66 TestHyperkitDriverSkipUpgrade 0
118 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
147 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
214 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PodmanEnv 0
257 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
258 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
259 TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
264 TestGvisorAddon 0
293 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
357 TestChangeNoneUser 0
360 TestScheduledStopWindows 0
376 TestStartStop/group/disable-driver-mounts 0.15
394 TestNetworkPlugins/group/cilium 5.32

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.33.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.33.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.3/cached-images (0.00s)

TestDownloadOnly/v1.33.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.33.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.3/binaries (0.00s)

TestDownloadOnly/v1.33.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.33.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.3/kubectl (0.00s)

TestDownloadOnly/v1.34.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.34.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0-beta.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.34.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-172887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-172887
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (5.32s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-561540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-561540

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-561540

>>> host: /etc/nsswitch.conf:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/hosts:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/resolv.conf:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-561540

>>> host: crictl pods:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: crictl containers:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> k8s: describe netcat deployment:
error: context "cilium-561540" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-561540" does not exist

>>> k8s: netcat logs:
error: context "cilium-561540" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-561540" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-561540" does not exist

>>> k8s: coredns logs:
error: context "cilium-561540" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-561540" does not exist

>>> k8s: api server logs:
error: context "cilium-561540" does not exist

>>> host: /etc/cni:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: ip a s:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: ip r s:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: iptables-save:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: iptables table nat:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-561540

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-561540

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-561540" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-561540" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-561540

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-561540

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-561540" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-561540" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-561540" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-561540" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-561540" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: kubelet daemon config:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> k8s: kubelet logs:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21223-1578987/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 04 Aug 2025 09:45:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-402519
contexts:
- context:
    cluster: kubernetes-upgrade-402519
    user: kubernetes-upgrade-402519
  name: kubernetes-upgrade-402519
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-402519
  user:
    client-certificate: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/client.crt
    client-key: /home/jenkins/minikube-integration/21223-1578987/.minikube/profiles/kubernetes-upgrade-402519/client.key
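This dump explains every lookup failure in this debug log: current-context is empty and the only configured entry is kubernetes-upgrade-402519, so nothing resolves for cilium-561540, whose cluster was never started. A hypothetical pre-check using client-go (not minikube's actual debugLogs code) that would detect this state:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig kubectl itself would use (~/.kube/config).
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	// The skipped cilium test never started a cluster, so its context
	// was never written to the file.
	if _, ok := cfg.Contexts["cilium-561540"]; !ok {
		fmt.Println(`context "cilium-561540" not present; every kubectl dump above fails for this reason`)
	}
}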
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-561540

>>> host: docker daemon status:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: docker daemon config:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: docker system info:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: cri-docker daemon status:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: cri-docker daemon config:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: cri-dockerd version:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: containerd daemon status:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: containerd daemon config:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: containerd config dump:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: crio daemon status:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: crio daemon config:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: /etc/crio:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

>>> host: crio config:
* Profile "cilium-561540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561540"

----------------------- debugLogs end: cilium-561540 [took: 5.139495918s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-561540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-561540
--- SKIP: TestNetworkPlugins/group/cilium (5.32s)